Column schema (name: dtype, observed min .. max, or string length range):
GUI and Desktop Applications: int64, 0 .. 1
A_Id: int64, 5.3k .. 72.5M
Networking and APIs: int64, 0 .. 1
Python Basics and Environment: int64, 0 .. 1
Other: int64, 0 .. 1
Database and SQL: int64, 0 .. 1
Available Count: int64, 1 .. 13
is_accepted: bool, 2 classes
Q_Score: int64, 0 .. 1.72k
CreationDate: stringlengths, 23 .. 23
Users Score: int64, -11 .. 327
AnswerCount: int64, 1 .. 31
System Administration and DevOps: int64, 0 .. 1
Title: stringlengths, 15 .. 149
Q_Id: int64, 5.14k .. 60M
Score: float64, -1 .. 1.2
Tags: stringlengths, 6 .. 90
Answer: stringlengths, 18 .. 5.54k
Question: stringlengths, 49 .. 9.42k
Web Development: int64, 0 .. 1
Data Science and Machine Learning: int64, 1 .. 1
ViewCount: int64, 7 .. 3.27M

Records follow, one per answer: a metadata header, then the answer text, then the question body.
Q: Does WordNet have "levels"? (NLP) | Q_Id 1,695,971 | tags: python,text,nlp,words,wordnet | created 2009-11-08T10:29:00.000 | Q_Score 6 | AnswerCount 5
A: A_Id 1,696,133 | Score 0.07983 | Users Score 2 | is_accepted false | Available Count 4 | category flags (GUI and Desktop Applications, Networking and APIs, Python Basics and Environment, Other, Database and SQL, System Administration and DevOps) all 0 | answer text, then question body:
In order to get levels, you need to predefine the content of each level. An ontology often defines these as the immediate IS_A children of a specific concept, but if that is absent, you need to develop a method of that yourself. The next step is to put a priority on each concept, in case you want to present only one category for each word. The priority can be done in multiple ways, for instance as the count of IS_A relations between the category and the word, or manually selected priorities for each category. For each word, you can then pick the category with the highest priority. For instance, you may want meat to be "food" rather than chemical substance. You may also want to pick some words, that change priority if they are in the path. For instance, if you want some chemicals which are also food, to be announced as chemicals, but others should still be food.
For example... Chicken is an animal. Burrito is a food. WordNet allows you to do "is-a"...the hiearchy feature. However, how do I know when to stop travelling up the tree? I want a LEVEL. That is consistent. For example, if presented with a bunch of words, I want wordNet to categorize all of them, but at a certain level, so it doesn't go too far up. Categorizing "burrito" as a "thing" is too broad, yet "mexican wrapped food" is too specific. I want to go up the hiearchy or down..until the right LEVEL.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,585

Q: Does WordNet have "levels"? (NLP) | Q_Id 1,695,971 | tags: python,text,nlp,words,wordnet | created 2009-11-08T10:29:00.000 | Q_Score 6 | AnswerCount 5
A: A_Id 1,717,952 | Score 0 | Users Score 0 | is_accepted false | Available Count 4 | category flags all 0 | answer text, then question body:
WordNet's hypernym tree ends with a single root synset for the word "entity". If you are using WordNet's C library, then you can get a while recursive structure for a synset's ancestors using traceptrs_ds, and you can get the whole synset tree by recursively following nextss and ptrlst pointers until you hit null pointers.
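A minimal sketch of walking the hypernym tree in Python using NLTK's WordNet corpus reader instead of the C library the answer mentions (nltk and its downloaded wordnet data are assumptions, not part of the original answer); the fixed depth cut is one way to get a consistent "level":

    from nltk.corpus import wordnet as wn

    def category_at_depth(word, depth=4):
        """Hypernym sitting `depth` steps below the root for the word's first noun sense."""
        synsets = wn.synsets(word, pos=wn.NOUN)
        if not synsets:
            return None
        path = synsets[0].hypernym_paths()[0]   # chain from the root ('entity') down to the synset
        return path[min(depth, len(path) - 1)]  # clamp for words that sit close to the root

    print(category_at_depth("burrito"))  # some mid-level food-related synset, depending on depth
    print(category_at_depth("chicken"))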
Question body: identical to the question text shown above for Q_Id 1,695,971.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,585

Q: Does WordNet have "levels"? (NLP) | Q_Id 1,695,971 | tags: python,text,nlp,words,wordnet | created 2009-11-08T10:29:00.000 | Q_Score 6 | AnswerCount 5
A: A_Id 1,698,380 | Score 1 | Users Score 6 | is_accepted false | Available Count 4 | category flags all 0 | answer text, then question body:
[Please credit Pete Kirkham, he first came with the reference to SUMO which may well answer the question asked by Alex, the OP] (I'm just providing a complement of information here; I started in a comment field but soon ran out of space and layout capabilites...) Alex: Most of SUMO is science or engineering? It does not contain every-day words like foods, people, cars, jobs, etc? Pete K: SUMO is an upper ontology. The mid-level ontologies (where you would find concepts between 'thing' and 'beef burrito') listed on the page don't include food, but reflect the sorts of organisations which fund the project. There is a mid-level ontology for people. There's also one for industries (and hence jobs), including food suppliers, but no mention of burritos if you grep it. My two cents 100% of WordNet (3.0 i.e. the latest, as well as older versions) is mapped to SUMO, and that may just be what Alex need. The mid-level ontologies associated with SUMO (or rather with MILO) are effectively in specific domains, and do not, at this time, include Foodstuff, but since WordNet does (include all -well, many of- these everyday things) you do not need to leverage any formal ontology "under" SUMO, but instead use Sumo's WordNet mapping (possibly in addition to WordNet, which, again, is not an ontology but with its informal and loose "hierarchy" may also help. Some difficulty may arise, however, from two area (and then some ;-) ?): the SUMO ontology's "level" may not be the level you'd have in mind for your particular application. For example while "Burrito" brings "Food", at top level entity in SUMO "Chicken" brings well "Chicken" which only through a long chain finds "Animal" (specifically: Chicken->Poultry->Bird->Warm_Blooded_Vertebrae->Vertebrae->Animal). Wordnet's coverage and metadata is impressive, but with regards to the mid-level concepts can be a bit inconsistent. For example "our" Burrito's hypernym is appropriately "Dish", which provides it with circa 140 food dishes, which includes generics such as "Soup" or "Casserole" as well as "Chicken Marengo" (but omitting say "Chicken Cacciatore") My point, in bringing up these issues, is not to criticize WordNet or SUMO and its related ontologies, but rather to illustrate simply some of the challenges associated with building ontology, particularly at the mid-level. Regardless of some possible flaws and lackings of a solution based on SUMO and WordNet, a pragmatic use of these frameworks may well "fit the bill" (85% of the time)
Question body: identical to the question text shown above for Q_Id 1,695,971.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,585

Q: Algorithm for solving Sudoku | Q_Id 1,697,334 | tags: python,algorithm,sudoku | created 2009-11-08T17:54:00.000 | Q_Score 22 | AnswerCount 11
A: A_Id 35,500,598 | Score 0.01818 | Users Score 1 | is_accepted false | Available Count 2 | category flags all 0 | answer text, then question body:
Not gonna write full code, but I did a sudoku solver a long time ago. I found that it didn't always solve it (the thing people do when they have a newspaper is incomplete!), but now think I know how to do it. Setup: for each square, have a set of flags for each number showing the allowed numbers. Crossing out: just like when people on the train are solving it on paper, you can iteratively cross out known numbers. Any square left with just one number will trigger another crossing out. This will either result in solving the whole puzzle, or it will run out of triggers. This is where I stalled last time. Permutations: there's only 9! = 362880 ways to arrange 9 numbers, easily precomputed on a modern system. All of the rows, columns, and 3x3 squares must be one of these permutations. Once you have a bunch of numbers in there, you can do what you did with the crossing out. For each row/column/3x3, you can cross out 1/9 of the 9! permutations if you have one number, 1/(8*9) if you have 2, and so forth. Cross permutations: Now you have a bunch of rows and columns with sets of potential permutations. But there's another constraint: once you set a row, the columns and 3x3s are vastly reduced in what they might be. You can do a tree search from here to find a solution.
I want to write a code in python to solve a sudoku puzzle. Do you guys have any idea about a good algorithm for this purpose. I read somewhere in net about a algorithm which solves it by filling the whole box with all possible numbers, then inserts known values into the corresponding boxes.From the row and coloumn of known values the known value is removed.If you guys know any better algorithm than this please help me to write one. Also I am confused that how i should read the known values from the user. It is really hard to enter the values one by one through console. Any easy way for this other than using gui?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 147,751

Q: Algorithm for solving Sudoku | Q_Id 1,697,334 | tags: python,algorithm,sudoku | created 2009-11-08T17:54:00.000 | Q_Score 22 | AnswerCount 11
A: A_Id 1,697,407 | Score 0.090659 | Users Score 5 | is_accepted false | Available Count 2 | category flags all 0 | answer text, then question body:
I wrote a simple program that solved the easy ones. It took its input from a file which was just a matrix with spaces and numbers. The datastructure to solve it was just a 9 by 9 matrix of a bit mask. The bit mask would specify which numbers were still possible on a certain position. Filling in the numbers from the file would reduce the numbers in all rows/columns next to each known location. When that is done you keep iterating over the matrix and reducing possible numbers. If each location has only one option left you're done. But there are some sudokus that need more work. For these ones you can just use brute force: try all remaining possible combinations until you find one that works.
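A compact sketch of the brute-force fallback described above, written as plain backtracking over a 9x9 list of lists with 0 for empty cells (illustrative code, not taken from the original post):

    def candidates(grid, r, c):
        used = set(grid[r]) | {grid[i][c] for i in range(9)}
        br, bc = 3 * (r // 3), 3 * (c // 3)
        used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
        return [n for n in range(1, 10) if n not in used]

    def solve(grid):
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    for n in candidates(grid, r, c):
                        grid[r][c] = n
                        if solve(grid):
                            return True
                        grid[r][c] = 0          # undo and try the next candidate
                    return False                # dead end
        return True                             # no empty cells left: solved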
Question body: identical to the question text shown above for Q_Id 1,697,334.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 147,751

Q: Neural Networks in Python without using any readymade libraries...i.e., from first principles..help! | Q_Id 1,698,017 | tags: python,scipy,neural-network | created 2009-11-08T21:44:00.000 | Q_Score 3 | AnswerCount 3
A: A_Id 1,698,110 | Score 0.26052 | Users Score 4 | is_accepted false | Available Count 1 | category flags all 0 | answer text, then question body:
If you're familiar with Matlab, check out the excellent Python libraries numpy, scipy, and matplotlib. Together, they provide the most commonly used subset of Matlab functions.
I am trying to learn programming in python and am also working against a deadline for setting up a neural network which looks like it's going to feature multidirectional associative memory and recurrent connections among other things. While the mathematics for all these things can be accessed from various texts and sources (and is accessible, so to speak), as a newbie to python (and programming as a profession) I am kinda floating in space looking for the firmament as I try to 'implement' things!! Information on any good online tutorials on constructing neural networks ab initio will be greatly appreciated :) In the meantime I am moonlighting as a MatLab user to nurse the wounds caused by Python :)
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 4,234

Q: Finding cycle of 3 nodes ( or triangles) in a graph | Q_Id 1,705,824 | tags: python,graph,geometry,cycle | created 2009-11-10T05:33:00.000 | Q_Score 10 | AnswerCount 11
A: A_Id 1,705,913 | Score 0 | Users Score 0 | is_accepted false | Available Count 2 | category flags all 0 | answer text, then question body:
Do you need to find 'all' of the 'triangles', or just 'some'/'any'? Or perhaps you just need to test whether a particular node is part of a triangle? The test is simple - given a node A, are there any two connected nodes B & C that are also directly connected. If you need to find all of the triangles - specifically, all groups of 3 nodes in which each node is joined to the other two - then you need to check every possible group in a very long running 'for each' loop. The only optimisation is ensuring that you don't check the same 'group' twice, e.g. if you have already tested that B & C aren't in a group with A, then don't check whether A & C are in a group with B.
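The neighbour-pair test in the answer takes only a few lines with adjacency sets; networkx, if it is available, offers nx.triangles and nx.enumerate_all_cliques for the same job. A sketch with illustrative names, assuming orderable node labels:

    def triangles(adj):
        """adj: dict mapping node -> set of neighbours (undirected graph)."""
        found = set()
        for a, nbrs in adj.items():
            for b in nbrs:
                if b <= a:                 # visit each unordered pair only once
                    continue
                for c in nbrs & adj[b]:    # common neighbours of a and b close a triangle
                    if c > b:
                        found.add((a, b, c))
        return found

    g = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2}}
    print(triangles(g))   # {(1, 2, 3)}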
I am working with complex networks. I want to find group of nodes which forms a cycle of 3 nodes (or triangles) in a given graph. As my graph contains about million edges, using a simple iterative solution (multiple "for" loop) is not very efficient. I am using python for my programming, if these is some inbuilt modules for handling these problems, please let me know. If someone knows any algorithm which can be used for finding triangles in graphs, kindly reply back.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 18,355

Q: Finding cycle of 3 nodes ( or triangles) in a graph | Q_Id 1,705,824 | tags: python,graph,geometry,cycle | created 2009-11-10T05:33:00.000 | Q_Score 10 | AnswerCount 11
A: A_Id 1,705,866 | Score 0.01818 | Users Score 1 | is_accepted false | Available Count 2 | category flags all 0 | answer text, then question body:
Even though it isn't efficient, you may want to implement a solution, so use the loops. Write a test so you can get an idea as to how long it takes. Then, as you try new approaches you can do two things: 1) Make certain that the answer remains the same. 2) See what the improvement is. Having a faster algorithm that misses something is probably going to be worse than having a slower one. Once you have the slow test, you can see if you can do this in parallel and see what the performance increase is. Then, you can see if you can mark all nodes that have less than 3 vertices. Ideally, you may want to shrink it down to just 100 or so first, so you can draw it, and see what is happening graphically. Sometimes your brain will see a pattern that isn't as obvious when looking at algorithms.
Question body: identical to the question text shown above for Q_Id 1,705,824.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 18,355

Q: Call Python function from MATLAB | Q_Id 1,707,780 | tags: python,matlab,language-interoperability | created 2009-11-10T12:57:00.000 | Q_Score 76 | AnswerCount 13
A: A_Id 50,057,490 | Score 0.046121 | Users Score 3 | is_accepted false | Available Count 1 | category flags all 0 | answer text, then question body:
Like Daniel said you can run python commands directly from Matlab using the py. command. To run any of the libraries you just have to make sure Malab is running the python environment where you installed the libraries: On a Mac: Open a new terminal window; type: which python (to find out where the default version of python is installed); Restart Matlab; type: pyversion('/anaconda2/bin/python'), in the command line (obviously replace with your path). You can now run all the libraries in your default python installation. For example: py.sys.version; py.sklearn.cluster.dbscan
I need to call a Python function from MATLAB. how can I do this?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 132,733

Q: Just Curious about Python+Numpy to Realtime Gesture Recognition | Q_Id 1,727,950 | tags: python,c,numpy,gesture-recognition | created 2009-11-13T08:43:00.000 | Q_Score 2 | AnswerCount 3
A: A_Id 1,730,684 | Score 0.066568 | Users Score 1 | is_accepted false | Available Count 1 | category flags all 0 | answer text, then question body:
I think the answer depends on three things: how well you code in Matlab, how well you code in Python/Numpy, and your algorithm. Both Matlab and Python can be fast for number crunching if you're diligent about vectorizing everything and using library calls. If your Matlab code is already very good I would be surprised if you saw much performance benefit moving to Numpy unless there's some specific idiom you can use to your advantage. You might not even see a large benefit moving to C. I this case your effort would likely be better spent tuning your algorithm. If your Matlab code isn't so good you could 1) write better Matlab code, 2) rewrite in good Numpy code, or 3) rewrite in C.
i 'm just finish labs meeting with my advisor, previous code is written in matlab and it run offline mode not realtime mode, so i decide to convert to python+numpy (in offline version) but after labs meeting, my advisor raise issue about speed of realtime recognition, so i have doubt about speed of python+numpy to do this project. or better in c? my project is about using electronic glove (2x sensors) to get realtime data and do data processing, recognition process
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 914

Q: replacing Matlab with python | Q_Id 1,776,290 | tags: python,matlab | created 2009-11-21T18:24:00.000 | Q_Score 18 | AnswerCount 8
A: A_Id 1,777,708 | Score 1 | Users Score 13 | is_accepted false | Available Count 1 | category flags all 0 | answer text, then question body:
I've been programming with Matlab for about 15 years, and with Python for about 10. It usually breaks down this way: If you can satisfy the following conditions: 1. You primarily use matrices and matrix operations 2. You have the money for a Matlab license 3. You work on a platform that mathworks supports Then, by all means, use Matlab. Otherwise, if you have data structures other than matrices, want an open-source option that allows you to deliver solutions without worrying about licenses, and need to build on platforms that mathworks does not support; then, go with Python. The matlab language is clunky, but the user interface is slick. The Python language is very nice -- with iterators, generators, and functional programming tools that matlab lacks; however, you will have to pick and choose to put together a nice slick interface if you don't like (or can't use) SAGE. I hope that helps.
i am a engineering student and i have to do a lot of numerical processing, plots, simulations etc. The tool that i use currently is Matlab. I use it in my university computers for most of my assignments. However, i want to know what are the free options available. i have done some research and many have said that python is a worthy replacement for matlab in various scenarios. i want to know how to do all this with python. i am using a mac so how do i install the different python packages. what are those packages? is it really a viable alternative? what are the things i can and cannot do using this python setup?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 15,393

Q: Any python Support Vector Machine library around that allows online learning? | Q_Id 1,783,669 | tags: python,artificial-intelligence,machine-learning,svm | created 2009-11-23T15:03:00.000 | Q_Score 10 | AnswerCount 5
A: A_Id 1,816,714 | Score 0 | Users Score 0 | is_accepted false | Available Count 1 | category flags all 0 | answer text, then question body:
Why would you want to train it online? Adding trainings instances would usually require to re-solve the quadratic programming problem associated with the SVM. A way to handle this is to train a SVM in batch mode, and when new data is available, check if these data points are in the [-1, +1] margin of the hyperplane. If so, retrain the SVM using all the old support vectors, and the new training data that falls in the margin. Of course, the results can be slightly different compared to batch training on all your data, as some points can be discarded that would be support vectors later on. So again, why do you want to perform online training of you SVM?
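A rough sketch of the batch-plus-margin-check scheme the answer outlines, written with scikit-learn's SVC (scikit-learn is an assumption here; it is not named in the answer). The previous support vectors can be pulled from model.support_vectors_ and model.support_:

    import numpy as np
    from sklearn.svm import SVC

    def update_svm(model, X_old_sv, y_old_sv, X_new, y_new):
        """Retrain on the old support vectors plus new points that fall inside the [-1, +1] margin."""
        in_margin = np.abs(model.decision_function(X_new)) <= 1.0
        X_train = np.vstack([X_old_sv, X_new[in_margin]])
        y_train = np.concatenate([y_old_sv, y_new[in_margin]])
        refreshed = SVC(kernel=model.kernel, C=model.C)
        refreshed.fit(X_train, y_train)
        return refreshed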
I do know there are some libraries that allow to use Support vector Machines from python code, but I am looking specifically for libraries that allow one to teach it online (this is, without having to give it all the data at once). Are there any?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 5,533

Q: replace the NaN value zero after an operation with arrays | Q_Id 1,803,516 | tags: python | created 2009-11-26T12:54:00.000 | Q_Score 10 | AnswerCount 4
A: A_Id 72,299,692 | Score 0 | Users Score 0 | is_accepted false | Available Count 1 | category flags: Python Basics and Environment 1, others 0 | answer text, then question body:
import numpy

alpha = numpy.array([1, 2, 3, numpy.nan, 4])
n = numpy.nan_to_num(alpha)
print(n)
# output: array([1., 2., 3., 0., 4.])
how can I replace the NaN value in an array, zero if an operation is performed such that as a result instead of the NaN value is zero operations as 0 / 0 = NaN can be replaced by 0
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 32,496

Q: How to unpickle from C code | Q_Id 1,881,851 | tags: python,c | created 2009-12-10T15:41:00.000 | Q_Score 1 | AnswerCount 7
A: A_Id 1,881,867 | Score 0 | Users Score 0 | is_accepted false | Available Count 1 | category flags all 0 | answer text, then question body:
take a look at module struct ?
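The answer only hints at the struct module; a common alternative (a swap, not what the answer proposes) is to skip pickle entirely and write the numpy array as a raw buffer that C can fread directly. The file name and dtypes below are arbitrary illustrations:

    import numpy as np

    m = np.random.rand(3, 4)
    with open("matrix.bin", "wb") as f:
        # write the shape as two int32 values, then the row-major float64 data
        np.array(m.shape, dtype=np.int32).tofile(f)
        m.astype(np.float64).tofile(f)
    # C side: fread two int32 for the shape, then rows*cols doubles.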
I have a python code computing a matrix, and I would like to use this matrix (or array, or list) from C code. I wanted to pickle the matrix from the python code, and unpickle it from c code, but I could not find documentation or example on how to do this. I found something about marshalling data, but nothing about unpickling from C. Edit : Commenters Peter H asked if I was working with numpy arrays. The answer is yes.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 1,690

Q: Test if point is in some rectangle | Q_Id 1,897,779 | tags: python,algorithm,point | created 2009-12-13T21:17:00.000 | Q_Score 11 | AnswerCount 5
A: A_Id 1,897,910 | Score 0 | Users Score 0 | is_accepted false | Available Count 2 | category flags all 0 | answer text, then question body:
Your R-tree approach is the best approach I know of (that's the approach I would choose over quadtrees, B+ trees, or BSP trees, as R-trees seem convenient to build in your case). Caveat: I'm no expert, even though I remember a few things from my senior year university class of algorithmic!
I have a large collection of rectangles, all of the same size. I am generating random points that should not fall in these rectangles, so what I wish to do is test if the generated point lies in one of the rectangles, and if it does, generate a new point. Using R-trees seem to work, but they are really meant for rectangles and not points. I could use a modified version of a R-tree algorithm which works with points too, but I'd rather not reinvent the wheel, if there is already some better solution. I'm not very familiar with data-structures, so maybe there already exists some structure that works for my problem? In summary, basically what I'm asking is if anyone knows of a good algorithm, that works in Python, that can be used to check if a point lies in any rectangle in a given set of rectangles. edit: This is in 2D and the rectangles are not rotated.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 13,398

Q: Test if point is in some rectangle | Q_Id 1,897,779 | tags: python,algorithm,point | created 2009-12-13T21:17:00.000 | Q_Score 11 | AnswerCount 5
A: A_Id 1,897,962 | Score 0.119427 | Users Score 3 | is_accepted false | Available Count 2 | category flags all 0 | answer text, then question body:
For rectangles that are aligned with the axes, you only need two points (four numbers) to identify the rectangle - conventionally, bottom-left and top-right corners. To establish whether a given point (Xtest, Ytest) overlaps with a rectangle (XBL, YBL, XTR, YTR) by testing both: Xtest >= XBL && Xtest <= XTR Ytest >= YBL && Ytest <= YTR Clearly, for a large enough set of points to test, this could be fairly time consuming. The question, then, is how to optimize the testing. Clearly, one optimization is to establish the minimum and maximum X and Y values for the box surrounding all the rectangles (the bounding box): a swift test on this shows whether there is any need to look further. Xtest >= Xmin && Xtest <= Xmax Ytest >= Ymin && Ytest <= Ymax Depending on how much of the total surface area is covered with rectangles, you might be able to find non-overlapping sub-areas that contain rectangles, and you could then avoid searching those sub-areas that cannot contain a rectangle overlapping the point, again saving comparisons during the search at the cost of pre-computation of suitable data structures. If the set of rectangles is sparse enough, there may be no overlapping, in which case this degenerates into the brute-force search. Equally, if the set of rectangles is so dense that there are no sub-ranges in the bounding box that can be split up without breaking rectangles. However, you could also arbitrarily break up the bounding area into, say, quarters (half in each direction). You would then use a list of boxes which would include more boxes than in the original set (two or four boxes for each box that overlapped one of the arbitrary boundaries). The advantage of this is that you could then eliminate three of the four quarters from the search, reducing the amount of searching to be done in total - at the expense of auxilliary storage. So, there are space-time trade-offs, as ever. And pre-computation versus search trade-offs. If you are unlucky, the pre-computation achieves nothing (for example, there are two boxes only, and they don't overlap on either axis). On the other hand, it could achieve considerable search-time benefit.
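The two comparisons above translate directly into Python; this sketch (names are illustrative) includes the bounding-box pre-check the answer recommends:

    def in_rect(x, y, rect):
        xbl, ybl, xtr, ytr = rect           # bottom-left and top-right corners
        return xbl <= x <= xtr and ybl <= y <= ytr

    def hits_any(x, y, rects, bbox):
        # bbox = (xmin, ymin, xmax, ymax) over all rectangles: a cheap early reject
        if not in_rect(x, y, bbox):
            return False
        return any(in_rect(x, y, r) for r in rects)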
Question body: identical to the question text shown above for Q_Id 1,897,779.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 13,398

Q: How do I add rows and columns to a NUMPY array? | Q_Id 1,909,994 | tags: python,arrays,numpy,reshape | created 2009-12-15T20:02:00.000 | Q_Score 9 | AnswerCount 4
A: A_Id 1,916,520 | Score 0.099668 | Users Score 2 | is_accepted false | Available Count 2 | category flags all 0 | answer text, then question body:
No matter what, you'll be stuck reallocating a chunk of memory, so it doesn't really matter if you use arr.resize(), np.concatenate, hstack/vstack, etc. Note that if you're accumulating a lot of data sequentially, Python lists are usually more efficient.
Hello I have a 1000 data series with 1500 points in each. They form a (1000x1500) size Numpy array created using np.zeros((1500, 1000)) and then filled with the data. Now what if I want the array to grow to say 1600 x 1100? Do I have to add arrays using hstack and vstack or is there a better way? I would want the data already in the 1000x1500 piece of the array not to be changed, only blank data (zeros) added to the bottom and right, basically. Thanks.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 17,603

Q: How do I add rows and columns to a NUMPY array? | Q_Id 1,909,994 | tags: python,arrays,numpy,reshape | created 2009-12-15T20:02:00.000 | Q_Score 9 | AnswerCount 4
A: A_Id 1,910,401 | Score 1.2 | Users Score 3 | is_accepted true | Available Count 2 | category flags all 0 | answer text, then question body:
If you want zeroes in the added elements, my_array.resize((1600, 1000)) should work. Note that this differs from numpy.resize(my_array, (1600, 1000)), in which previous lines are duplicated, which is probably not what you want. Otherwise (for instance if you want to avoid initializing elements to zero, which could be unnecessary), you can indeed use hstack and vstack to add an array containing the new elements; numpy.concatenate() (see pydoc numpy.concatenate) should work too (it is just more general, as far as I understand). In either case, I would guess that a new memory block has to be allocated in order to extend the array, and that all these methods take about the same time.
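For growing in both dimensions at once (the question's 1500x1000 to 1600x1100 case), one concrete option, not taken verbatim from the answer, is allocate-and-copy, which keeps the old block in the top-left corner:

    import numpy as np

    a = np.zeros((1500, 1000))          # original data
    grown = np.zeros((1600, 1100))      # new, larger block of zeros
    grown[:1500, :1000] = a             # old values keep their positions
    # equivalently: grown = np.pad(a, ((0, 100), (0, 100)), mode="constant")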
Question body: identical to the question text shown above for Q_Id 1,909,994.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 17,603

Q: python lottery suggestion | Q_Id 1,950,539 | tags: python | created 2009-12-23T03:30:00.000 | Q_Score 0 | AnswerCount 5
A: A_Id 1,950,575 | Score 0 | Users Score 0 | is_accepted false | Available Count 1 | category flags: Python Basics and Environment 1, others 0 | answer text, then question body:
The main shortcoming of software-based methods of generating lottery numbers is the fact that all random numbers generated by software are pseudo-random. This may not be a problem for your simple application, but you did ask about a 'specific mathematical philosophy'. You will have noticed that all commercial lottery systems use physical methods: balls with numbers. And behind the scenes, the numbers generated by physical lottery systems will be carefully scrutunised for indications of non-randomness and steps taken to eliminate it. As I say, this may not be a consideration for your simple application, but the overriding requirement of a true lottery (the 'specific mathematical philosophy') should be mathematically demonstrable randomness
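Since the poster wants to build the draw rather than call shuffle, here is a small sketch of a partial Fisher-Yates selection of 20 names out of 100 using only random.randrange; random.sample(names, 20) is the standard-library one-liner it imitates:

    import random

    def draw(names, k):
        pool = list(names)
        picked = []
        for i in range(k):
            j = random.randrange(i, len(pool))   # pick an index from the not-yet-drawn tail
            pool[i], pool[j] = pool[j], pool[i]
            picked.append(pool[i])
        return picked

    winners = draw([f"name{n}" for n in range(100)], 20)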
I know python offers random module to do some simple lottery. Let say random.shuffle() is a good one. However, I want to build my own simple one. What should I look into? Is there any specific mathematical philosophies behind lottery? Let say, the simplest situation. 100 names and generate 20 names randomly. I don't want to use shuffle, since I want to learn to build one myself. I need some advise to start. Thanks.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 1,639

Q: Interpolating a scalar field in a 3D space | Q_Id 1,972,172 | tags: python,algorithm,interpolation | created 2009-12-28T23:46:00.000 | Q_Score 8 | AnswerCount 3
A: A_Id 1,972,198 | Score 0.066568 | Users Score 1 | is_accepted false | Available Count 2 | category flags all 0 | answer text, then question body:
Why not try quadlinear interpolation? extend Trilinear interpolation by another dimension. As long as a linear interpolation model fits your data, it should work.
I have a 3D space (x, y, z) with an additional parameter at each point (energy), giving 4 dimensions of data in total. I would like to find a set of x, y, z points which correspond to an iso-energy surface found by interpolating between the known points. The spacial mesh has constant spacing and surrounds the iso-energy surface entirely, however, it does not occupy a cubic space (the mesh occupies a roughly cylindrical space) Speed is not crucial, I can leave this number crunching for a while. Although I'm coding in Python and NumPy, I can write portions of the code in FORTRAN. I can also wrap existing C/C++/FORTRAN libraries for use in the scripts, if such libraries exist. All examples and algorithms that I have so far found online (and in Numerical Recipes) stop short of 4D data.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 5,443

Q: Interpolating a scalar field in a 3D space | Q_Id 1,972,172 | tags: python,algorithm,interpolation | created 2009-12-28T23:46:00.000 | Q_Score 8 | AnswerCount 3
A: A_Id 1,973,347 | Score 0.132549 | Users Score 2 | is_accepted false | Available Count 2 | category flags all 0 | answer text, then question body:
Since you have a spatial mesh with constant spacing, you can identify all neighbors on opposite sides of the isosurface. Choose some form of interpolation (q.v. Reed Copsey's answer) and do root-finding along the line between each such neighbor.
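For linear interpolation the root-finding along each crossing edge has a closed form; a sketch assuming energies e1 and e2 at neighbouring grid points p1 and p2 that straddle the iso value (names are illustrative):

    import numpy as np

    def crossing_point(p1, e1, p2, e2, iso):
        """Linearly interpolate where the field passes through `iso` on the edge p1-p2."""
        t = (iso - e1) / (e2 - e1)        # lies in [0, 1] when e1 and e2 straddle iso
        return np.asarray(p1) + t * (np.asarray(p2) - np.asarray(p1))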
Question body: identical to the question text shown above for Q_Id 1,972,172.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 5,443

Q: How do I plot a graph in Python? | Q_Id 2,048,041 | tags: python,matplotlib | created 2010-01-12T10:02:00.000 | Q_Score 6 | AnswerCount 4
A: A_Id 10,303,500 | Score 0.099668 | Users Score 2 | is_accepted false | Available Count 1 | category flags: Python Basics and Environment 1, others 0 | answer text, then question body:
There is a very good book: Sandro Tosi, Matplotlib for Python Developers, Packt Pub., 2009.
I have installed Matplotlib, and I have created two lists, x and y. I want the x-axis to have values from 0 to 100 in steps of 10 and the y-axis to have values from 0 to 1 in steps of 0.1. How do I plot this graph?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 5,478

Q: File indexing (using Binary trees?) in Python | Q_Id 2,110,843 | tags: python,algorithm,indexing,binary-tree | created 2010-01-21T16:22:00.000 | Q_Score 4 | AnswerCount 5
A: A_Id 2,111,067 | Score 0.039979 | Users Score 1 | is_accepted false | Available Count 3 | category flags: Database and SQL 1, others 0 | answer text, then question body:
If the data is already organized in fields, it doesn't sound like a text searching/indexing problem. It sounds like tabular data that would be well-served by a database. Script the file data into a database, index as you see fit, and query the data in any complex way the database supports. That is unless you're looking for a cool learning project. Then, by all means, come up with an interesting file indexing scheme.
Background I have many (thousands!) of data files with a standard field based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available / searchable. (Some options include RDBMS, NoSQL stuff, using the grep/awk and friends, etc.). Proposal In particular, one idea that appeals to me is "indexing" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded me any pre-rolled solutions for this. I realize this is a little ill-formed, and solutions are welcome. Additional Details files long, not wide millions of lines per hour, spread over 100 files per hour tab seperated, not many columns (~10) fields are short (say < 50 chars per field) queries are on fields, combinations of fields, and can be historical Drawbacks to various solutions: (All of these are based on my observations and tests, but I'm open to correction) BDB has problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible) single writer (if it's possible to get around this, I want to see code!) hard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over). since it only stores strings, there is a serialize / deserialize step RDBMSes Wins: flat table model is excellent for querying, indexing Losses: In my experience, the problem comes with indexing. From what I've seen (and please correct me if I am wrong), the issue with rdbmses I know (sqlite, postgres) supporting either batch load (then indexing is slow at the end), or row by row loading (which is low). Maybe I need more performance tuning.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,801

Q: File indexing (using Binary trees?) in Python | Q_Id 2,110,843 | tags: python,algorithm,indexing,binary-tree | created 2010-01-21T16:22:00.000 | Q_Score 4 | AnswerCount 5
A: A_Id 2,110,912 | Score 0.039979 | Users Score 1 | is_accepted false | Available Count 3 | category flags: Database and SQL 1, others 0 | answer text, then question body:
The physical storage access time will tend to dominate anything you do. When you profile, you'll find that the read() is where you spend most of your time. To reduce the time spent waiting for I/O, your best bet is compression. Create a huge ZIP archive of all of your files. One open, fewer reads. You'll spend more CPU time. I/O time, however, will dominate your processing, so reduce I/O time by zipping everything.
Question body: identical to the question text shown above for Q_Id 2,110,843.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,801

Q: File indexing (using Binary trees?) in Python | Q_Id 2,110,843 | tags: python,algorithm,indexing,binary-tree | created 2010-01-21T16:22:00.000 | Q_Score 4 | AnswerCount 5
A: A_Id 12,805,622 | Score 0.039979 | Users Score 1 | is_accepted false | Available Count 3 | category flags: Database and SQL 1, others 0 | answer text, then question body:
sqlite3 is fast, small, part of python (so nothing to install) and provides indexing of columns. It writes to files, so you wouldn't need to install a database system.
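A minimal sketch of that suggestion with the standard-library sqlite3 and csv modules; the file name, table name, and three-column layout are made up for illustration:

    import csv
    import sqlite3

    con = sqlite3.connect("logs.db")
    con.execute("CREATE TABLE IF NOT EXISTS rows (ts TEXT, host TEXT, value TEXT)")
    with open("data.tsv", newline="") as f:
        # bulk-load the tab-separated rows, then index the field you query on
        con.executemany("INSERT INTO rows VALUES (?, ?, ?)", csv.reader(f, delimiter="\t"))
    con.execute("CREATE INDEX IF NOT EXISTS idx_host ON rows (host)")
    con.commit()
    hits = con.execute("SELECT * FROM rows WHERE host = ?", ("web01",)).fetchall()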
Question body: identical to the question text shown above for Q_Id 2,110,843.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,801

Q: how to generate permutations of array in python? | Q_Id 2,124,347 | tags: python,permutation | created 2010-01-23T19:12:00.000 | Q_Score 30 | AnswerCount 6
A: A_Id 2,124,356 | Score 0.033321 | Users Score 1 | is_accepted false | Available Count 1 | category flags: Python Basics and Environment 1, others 0 | answer text, then question body:
You may want the itertools.permutations() function. Gotta love that itertools module! NOTE: New in 2.6
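itertools.permutations enumerates every ordering, which is far more than the 5000 random ones the poster needs; drawing them directly with random.shuffle on copies is simpler (a sketch):

    import random

    items = list(range(27))
    perms = []
    for _ in range(5000):
        p = items[:]            # copy, then shuffle the copy in place
        random.shuffle(p)
        perms.append(tuple(p))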
i have an array of 27 elements,and i don't want to generate all permutations of array (27!) i need 5000 randomly choosed permutations,any tip will be useful...
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 38,205

Q: Machine learning issue for negative instances | Q_Id 2,126,383 | tags: python,artificial-intelligence,machine-learning,data-mining | created 2010-01-24T08:07:00.000 | Q_Score 2 | AnswerCount 1
A: A_Id 2,126,656 | Score 0.379949 | Users Score 2 | is_accepted false | Available Count 1 | category flags all 0 | answer text, then question body:
The question is very unclear, but assuming what you mean is that your machine learning algorithm is not working without negative examples and you can't give it every possible negative example, then it's perfectly alright to give it some negative examples. The point of data mining (a.k.a. machine learning) is to try coming up with general rules based on a relatively small samples of data and then applying them to larger data. In real life problems you will never have all the data. If you had all possible inputs, you could easily create a simple sequence of if-then rules which would always be correct. If it was that simple, robots would be doing all our thinking for us by now.
I had to build a concept analyzer for computer science field and I used for this machine learning, the orange library for Python. I have the examples of concepts, where the features are lemma and part of speech, like algorithm|NN|concept. The problem is that any other word, that in fact is not a concept, is classified as a concept, due to the lack of negative examples. It is not feasable to put all the other words in learning file, classified as simple words not concepts(this will work, but is not quite a solution). Any idea? Thanks.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 296

Q: How can I detect and track people using OpenCV? | Q_Id 2,188,646 | tags: python,opencv,computer-vision,motion-detection | created 2010-02-02T23:50:00.000 | Q_Score 37 | AnswerCount 4
A: A_Id 2,352,019 | Score 0.244919 | Users Score 5 | is_accepted false | Available Count 2 | category flags all 0 | answer text, then question body:
Nick, What you are looking for is not people detection, but motion detection. If you tell us a lot more about what you are trying to solve/do, we can answer better. Anyway, there are many ways to do motion detection depending on what you are going to do with the results. Simplest one would be differencing followed by thresholding while a complex one could be proper background modeling -> foreground subtraction -> morphological ops -> connected component analysis, followed by blob analysis if required. Download the opencv code and look in samples directory. You might see what you are looking for. Also, there is an Oreilly book on OCV. Hope this helps, Nand
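A minimal sketch of the "differencing followed by thresholding" pipeline using the modern cv2 bindings rather than the old C samples (OpenCV 4.x findContours signature assumed; camera index and thresholds are arbitrary):

    import cv2

    cap = cv2.VideoCapture(0)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)                       # frame differencing
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:                     # ignore tiny blobs
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("motion", frame)
        prev = gray
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break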
I have a camera that will be stationary, pointed at an indoors area. People will walk past the camera, within about 5 meters of it. Using OpenCV, I want to detect individuals walking past - my ideal return is an array of detected individuals, with bounding rectangles. I've looked at several of the built-in samples: None of the Python samples really apply The C blob tracking sample looks promising, but doesn't accept live video, which makes testing difficult. It's also the most complicated of the samples, making extracting the relevant knowledge and converting it to the Python API problematic. The C 'motempl' sample also looks promising, in that it calculates a silhouette from subsequent video frames. Presumably I could then use that to find strongly connected components and extract individual blobs and their bounding boxes - but I'm still left trying to figure out a way to identify blobs found in subsequent frames as the same blob. Is anyone able to provide guidance or samples for doing this - preferably in Python?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 45,934

Q: How can I detect and track people using OpenCV? | Q_Id 2,188,646 | tags: python,opencv,computer-vision,motion-detection | created 2010-02-02T23:50:00.000 | Q_Score 37 | AnswerCount 4
A: A_Id 2,190,799 | Score 0.099668 | Users Score 2 | is_accepted false | Available Count 2 | category flags all 0 | answer text, then question body:
This is similar to a project we did as part of a Computer Vision course, and I can tell you right now that it is a hard problem to get right. You could use foreground/background segmentation, find all blobs and then decide that they are a person. The problem is that it will not work very well since people tend to go together, go past each other and so on, so a blob might very well consist of two persons and then you will see that blob splitting and merging as they walk along. You will need some method of discriminating between multiple persons in one blob. This is not a problem I expect anyone being able to answer in a single SO-post. My advice is to dive into the available research and see if you can find anything there. The problem is not unsolvavble considering that there exists products which do this: Autoliv has a product to detect pedestrians using an IR-camera on a car, and I have seen other products which deal with counting customers entering and exiting stores.
Question body: identical to the question text shown above for Q_Id 2,188,646.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 45,934

Q: OpenCV 2.0 and Python | Q_Id 2,195,441 | tags: python,opencv | created 2010-02-03T21:07:00.000 | Q_Score 6 | AnswerCount 4
A: A_Id 2,405,587 | Score 1.2 | Users Score 3 | is_accepted true | Available Count 1 | category flags all 0 | answer text, then question body:
After Step 1 (Installer) just copy the content of C:\OpenCV2.0\Python2.6\Lib\site-packages to C:\Python26\Lib\site-packages (standard installation path assumed). That's all. If you have a webcam installed you can try the camshift.demo in C:\OpenCV2.0\samples\python The deprecated stuff (C:\OpenCV2.0\samples\swig_python) does not work at the moment as somebody wrote above. The OpenCV People are working on it. Here is the full picture: 31/03/10 (hopefully) Next OpenCV Official Release: 2.1.0 is due March 31st, 2010. link://opencv.willowgarage.com/wiki/Welcome/Introduction#Announcements 04/03/10 [james]rewriting samples for new Python 5:36 PM Mar 4th via API link://twitter.com/opencvlibrary 12/31/09 We've gotten more serious about OpenCV's software engineering. We now have a full C++ and Python interface. link://opencv.willowgarage.com/wiki/OpenCV%20Monthly 9/30/09 Several (actually, most) SWIG-based Python samples do not work correctly now. The reason is this problem is being investigated and the intermediate update of the OpenCV Python package will be released as soon as the problem is sorted out. link://opencv.willowgarage.com/wiki/OpenCV%20Monthly
I cannot get the example Python programs to run. When executing the Python command "from opencv import cv" I get the message "ImportError: No module named _cv". There is a stale _cv.pyd in the site-packages directory, but no _cv.py anywhere. See step 5 below. MS Windows XP, VC++ 2008, Python 2.6, OpenCV 2.0 Here's what I have done. Downloaded and ran the MS Windows installer for OpenCV2.0. Downloaded and installed CMake Downloaded and installed SWIG Ran CMake. After unchecking "ENABLE_OPENMP" in the CMake GUI, I was able to build OpenCV using INSTALL.vcproj and BUILD_ALL.vcproj. I do not know what the difference is, so I built everything under both of those project files. The C example programs run fine. Copied contents of OpenCV2.0/Python2.6/lib/site-packages to my installed Python2.6/lib/site-packages directory. I notice that it contains an old _cv.pyd and an old libcv.dll.a.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 16,728

Q: Minimal linear regression program | Q_Id 2,204,087 | tags: c++,python,bash,linear-algebra | created 2010-02-04T23:45:00.000 | Q_Score 2 | AnswerCount 3
A: A_Id 2,204,122 | Score 0.066568 | Users Score 1 | is_accepted false | Available Count 2 | category flags: System Administration and DevOps 1, others 0 | answer text, then question body:
How about extracting the coeffs into a file, import to another machine and then use Excel/Matlab/whatever other program that does this for you?
I am running some calculations in an external machine and at the end I get X, Y pairs. I want to apply linear regression and obtain A, B, and R2. In this machine I can not install anything (it runs Linux) and has basic stuff installed on it, python, bash (of course), etc. I wonder what would be the best approach to use a script (python, bash, etc) or program (I can compile C and C++) that gives me the linear regression coefficients without the need to add external libraries (numpy, etc)
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,428

Q: Minimal linear regression program | Q_Id 2,204,087 | tags: c++,python,bash,linear-algebra | created 2010-02-04T23:45:00.000 | Q_Score 2 | AnswerCount 3
A: A_Id 2,204,124 | Score 0.197375 | Users Score 3 | is_accepted false | Available Count 2 | category flags: System Administration and DevOps 1, others 0 | answer text, then question body:
For a single, simple, known function (as in your case: a line) it is not hard to simply code a basic least square routine from scratch (but does require some attention to detail). It is a very common assignment in introductory numeric analysis classes. So, look up least squares on wikipedia or mathworld or in a text book and go to town.
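The from-scratch least-squares fit really is short; a dependency-free sketch returning the slope A, intercept B, and R2 the question asks for:

    def linreg(xs, ys):
        n = len(xs)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)      # slope
        b = (sy - a * sx) / n                              # intercept
        ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
        ss_tot = sum((y - sy / n) ** 2 for y in ys)
        return a, b, 1 - ss_res / ss_tot                   # A, B, R2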
Question body: identical to the question text shown above for Q_Id 2,204,087.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,428

Q: Matplotlib turn off antialias for text in plot? | Q_Id 2,219,503 | tags: python,matplotlib,antialiasing | created 2010-02-08T04:01:00.000 | Q_Score 5 | AnswerCount 4
A: A_Id 2,223,165 | Score 1.2 | Users Score 1 | is_accepted true | Available Count 1 | category flags all 0 | answer text, then question body:
It seems this is not possible. Some classes such as Line2D have a "set_antialiased" method, but Text lacks this. I suggest you file a feature request on the Sourceforge tracker, and send an email to the matplotlib mailing list mentioning the request.
Is there any way to turn off antialias for all text in a plot, especially the ticklabels?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,659

Q: Design pattern for ongoing survey anayisis | Q_Id 2,223,576 | tags: python,design-patterns,statistics,matrix,survey | created 2010-02-08T17:38:00.000 | Q_Score 1 | AnswerCount 3
A: A_Id 2,224,074 | Score 0 | Users Score 0 | is_accepted false | Available Count 1 | category flags all 0 | answer text, then question body:
On the analysis, if your six questions have been posed in a way that would lead you to believe the answers will be correlated, consider conducting a factor analysis on the raw scores first. Often comparing the factors across regions or customer type has more statistical power than comparing across questions alone. Also, the factor scores are more likely to be normally distributed (they are the weighted sum of 6 observations) while the six questions alone would not. This allows you to apply t-tests based on the normal distibution when comparing factor scores. One watchout, though. If you assign numeric values to answers - 1 = much worse, 2 = worse, etc. you are implying that the distance between much worse and worse is the same as the distance between worse and same. This is generally not true - you might really have to screw up to get a vote of "much worse" while just being a passive screw up might get you a "worse" score. So the assignment of cardinal (numerics) to ordinal (ordering) has a bias of its own. The unequal number of participants per quarter isn't a problem - there are statistical t-tests that deal with unequal sample sizes.
I'm doing an ongoing survey, every quarter. We get people to sign up (where they give extensive demographic info). Then we get them to answer six short questions with 5 possible values much worse, worse, same, better, much better. Of course over time we will not get the same participants,, some will drop out and some new ones will sign up,, so I'm trying to decide how to best build a db and code (hope to use Python, Numpy?) to best allow for ongoing collection and analysis by the various categories defined by the initial demographic data..As of now we have 700 or so participants, so the dataset is not too big. I.E.; demographic, UID, North, south, residential. commercial Then answer for 6 questions for Q1 Same for Q2 and so on,, then need able to slice dice and average the values for the quarterly answers by the various demographics to see trends over time. The averaging, grouping and so forth is modestly complicated by having differing participants each quarter Any pointers to design patterns for this sort of DB? and analysis? Is this a sparse matrix?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 1,157

Q: Associative Matrices? | Q_Id 2,247,197 | tags: python,data-structures,matrix,d,associative-array | created 2010-02-11T19:39:00.000 | Q_Score 6 | AnswerCount 3
A: A_Id 2,247,284 | Score 1.2 | Users Score 2 | is_accepted true | Available Count 1 | category flags: Python Basics and Environment 1, others 0 | answer text, then question body:
Why not just use a standard matrix, but then have two dictionaries - one that converts the row keys to row indices and one that converts the columns keys to columns indices. You could make your own structure that would work this way fairly easily I think. You just make a class that contains the matrix and the two dictionaries and go from there.
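A bare-bones version of the suggested structure (class and method names are illustrative): a numpy block plus two key-to-index dictionaries; the fixed shape is what enforces the non-jagged property.

    import numpy as np

    class AssocMatrix:
        def __init__(self, row_keys, col_keys):
            self.rows = {k: i for i, k in enumerate(row_keys)}
            self.cols = {k: i for i, k in enumerate(col_keys)}
            self.data = np.zeros((len(self.rows), len(self.cols)))

        def __getitem__(self, key):
            r, c = key
            return self.data[self.rows[r], self.cols[c]]

        def __setitem__(self, key, value):
            r, c = key
            self.data[self.rows[r], self.cols[c]] = value

    m = AssocMatrix(["alice", "bob"], ["q1", "q2", "q3"])
    m["alice", "q2"] = 4.0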
I'm working on a project where I need to store a matrix of numbers indexed by two string keys. The matrix is not jagged, i.e. if a column key exists for any row then it should exist for all rows. Similarly, if a row key exists for any column then it should exist for all columns. The obvious way to express this is with an associative array of associative arrays, but this is both awkward and inefficient, and it doesn't enforce the non-jaggedness property. Do any popular programming languages provide an associative matrix either built into the language or as part of their standard libraries? If so, how do they work, both at the API and implementation level? I'm using Python and D for this project, but examples in other languages would still be useful because I would be able to look at the API and figure out the best way to implement something similar in Python or D.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 566

Q: How do quickly search through a .csv file in Python | Q_Id 2,299,454 | tags: python,dictionary,csv,large-files | created 2010-02-19T20:51:00.000 | Q_Score 4 | AnswerCount 6
A: A_Id 3,279,041 | Score 0 | Users Score 0 | is_accepted false | Available Count 2 | category flags: Python Basics and Environment 1, others 0 | answer text, then question body:
my idea is to use python zodb module to store dictionaty type data and then create new csv file using that data structure. do all your operation at that time.
I'm reading a 6 million entry .csv file with Python, and I want to be able to search through this file for a particular entry. Are there any tricks to search the entire file? Should you read the whole thing into a dictionary or should you perform a search every time? I tried loading it into a dictionary but that took ages so I'm currently searching through the whole file every time which seems wasteful. Could I possibly utilize that the list is alphabetically ordered? (e.g. if the search word starts with "b" I only search from the line that includes the first word beginning with "b" to the line that includes the last word beginning with "b") I'm using import csv. (a side question: it is possible to make csv go to a specific line in the file? I want to make the program start at a random line) Edit: I already have a copy of the list as an .sql file as well, how could I implement that into Python?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 20,439

Q: How do quickly search through a .csv file in Python | Q_Id 2,299,454 | tags: python,dictionary,csv,large-files | created 2010-02-19T20:51:00.000 | Q_Score 4 | AnswerCount 6
A: A_Id 2,443,606 | Score 0.033321 | Users Score 1 | is_accepted false | Available Count 2 | category flags: Python Basics and Environment 1, others 0 | answer text, then question body:
You can't go directly to a specific line in the file because lines are variable-length, so the only way to know when line #n starts is to search for the first n newlines. And it's not enough to just look for '\n' characters because CSV allows newlines in table cells, so you really do have to parse the file anyway.
Question body: identical to the question text shown above for Q_Id 2,299,454.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 20,439

Q: Calculating the area underneath a mathematical function | Q_Id 2,352,499 | tags: python,polynomial-math,numerical-integration | created 2010-02-28T20:18:00.000 | Q_Score 4 | AnswerCount 5
A: A_Id 2,352,875 | Score 0.119427 | Users Score 3 | is_accepted false | Available Count 1 | category flags: Python Basics and Environment 1, others 0 | answer text, then question body:
It might be overkill to resort to general-purpose numeric integration algorithms for your special case...if you work out the algebra, there's a simple expression that gives you the area. You have a polynomial of degree 2: f(x) = ax2 + bx + c You want to find the area under the curve for x in the range [0,1]. The antiderivative F(x) = ax3/3 + bx2/2 + cx + C The area under the curve from 0 to 1 is: F(1) - F(0) = a/3 + b/2 + c So if you're only calculating the area for the interval [0,1], you might consider using this simple expression rather than resorting to the general-purpose methods.
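The antiderivative result above in two lines, checked against numpy's polynomial helpers (numpy uses highest-power-first coefficient order; the sample coefficients are arbitrary):

    import numpy as np

    a, b, c = 2.0, -1.0, 0.5                  # f(x) = a*x**2 + b*x + c
    area_exact = a / 3 + b / 2 + c            # F(1) - F(0) from the answer
    F = np.polyint([a, b, c])                 # antiderivative coefficients
    area_numpy = np.polyval(F, 1) - np.polyval(F, 0)
    print(area_exact, area_numpy)             # both 0.666... for these coefficients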
I have a range of data that I have approximated using a polynomial of degree 2 in Python. I want to calculate the area underneath this polynomial between 0 and 1. Is there a calculus, or similar package from numpy that I can use, or should I just make a simple function to integrate these functions? I'm a little unclear what the best approach for defining mathematical functions is. Thanks.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 4,045

Q: Solving Sparse Linear Problem With Some Known Boundary Values | Q_Id 2,361,176 | tags: python,numpy,sparse-matrix,poisson | created 2010-03-02T05:57:00.000 | Q_Score 1 | AnswerCount 1
A: A_Id 2,361,204 | Score 1.2 | Users Score 1 | is_accepted true | Available Count 1 | category flags all 0 | answer text, then question body:
If I understand correctly, some elements of x are known, and some are not, and you want to solve Ax = b for the unknown values of x, correct? Let Ax = [A1 A2][x1; x2] = b, where the vector x = [x1; x2], the vector x1 has the unknown values of x, and vector x2 have the known values of x. Then, A1x1 = b - A2x2. Therefore, solve for x1 using scipy.linalg.solve or any other desired solver.
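A sketch of that partitioning with scipy.sparse; beyond what the answer states, it also restricts to the rows of the unknown entries, which is the usual move when the known values are boundary conditions (the index array `known`, the helper name, and the use of spsolve are assumptions):

    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import spsolve

    def solve_with_known(A, b, known, x_known):
        """Solve A x = b where x[known] = x_known is prescribed."""
        A = csc_matrix(A)
        n = A.shape[0]
        unknown = np.setdiff1d(np.arange(n), known)
        A1 = A[:, unknown][unknown, :]                       # block acting on the unknowns
        rhs = b[unknown] - (A[:, known] @ x_known)[unknown]  # b - A2 x2, unknown rows only
        x = np.empty(n)
        x[known] = x_known
        x[unknown] = spsolve(A1.tocsc(), rhs)
        return x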
I'm trying to solve a Poisson equation on a rectangular domain which ends up being a linear problem like Ax=b but since I know the boundary conditions, there are nodes where I have the solution values. I guess my question is... How can I solve the sparse system Ax=b if I know what some of the coordinates of x are and the undetermined values depend on these as well? It's the same as a normal solve except I know some of the solution to begin with. Thanks!
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 533

Q: Generate a heatmap in MatPlotLib using a scatter data set | Q_Id 2,369,492 | tags: python,matplotlib,heatmap,histogram2d | created 2010-03-03T07:42:00.000 | Q_Score 222 | AnswerCount 12
A: A_Id 2,371,227 | Score 0.033321 | Users Score 2 | is_accepted false | Available Count 1 | category flags all 0 | answer text, then question body:
Make a 2-dimensional array that corresponds to the cells in your final image, called say heatmap_cells and instantiate it as all zeroes. Choose two scaling factors that define the difference between each array element in real units, for each dimension, say x_scale and y_scale. Choose these such that all your datapoints will fall within the bounds of the heatmap array. For each raw datapoint with x_value and y_value: heatmap_cells[floor(x_value/x_scale),floor(y_value/y_scale)]+=1
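The manual binning described above is what numpy.histogram2d already does; a sketch that turns scatter data into a heatmap image with matplotlib:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.random.randn(10000)
    y = np.random.randn(10000)
    heatmap, xedges, yedges = np.histogram2d(x, y, bins=50)
    extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]]
    # histogram2d indexes counts as [x_bin, y_bin]; transpose so y runs vertically
    plt.imshow(heatmap.T, extent=extent, origin="lower")
    plt.colorbar()
    plt.show()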
I have a set of X,Y data points (about 10k) that are easy to plot as a scatter plot but that I would like to represent as a heatmap. I looked through the examples in MatPlotLib and they all seem to already start with heatmap cell values to generate the image. Is there a method that converts a bunch of x,y, all different, to a heatmap (where zones with higher frequency of x,y would be "warmer")?
0
1
284,427
0
2,392,026
0
0
0
1
1
true
8
2010-03-06T09:30:00.000
15
2
0
SQLite or flat text file?
2,392,017
1.2
python,sql,database,r,file-format
If all the languages support SQLite - use it. The power of SQL might not be useful to you right now, but it probably will be at some point, and it saves you having to rewrite things later when you decide you want to be able to query your data in more complicated ways. SQLite will also probably be substantially faster if you only want to access certain bits of data in your datastore - since doing that with a flat-text file is challenging without reading the whole file in (though it's not impossible).
I process a lot of text/data that I exchange between Python, R, and sometimes Matlab. My go-to is the flat text file, but also use SQLite occasionally to store the data and access from each program (not Matlab yet though). I don't use GROUPBY, AVG, etc. in SQL as much as I do these operations in R, so I don't necessarily require the database operations. For such applications that requires exchanging data among programs to make use of available libraries in each language, is there a good rule of thumb on which data exchange format/method to use (even XML or NetCDF or HDF5)? I know between Python -> R there is rpy or rpy2 but I was wondering about this question in a more general sense - I use many computers which all don't have rpy2 and also use a few other pieces of scientific analysis software that require access to the data at various times (the stages of processing and analysis are also separated).
0
1
3,563
0
2,433,626
0
0
1
0
2
true
1
2010-03-12T12:54:00.000
5
2
0
OpenCV performance in different languages
2,432,792
1.2
c++,python,c,performance,opencv
You've answered your own question pretty well. Most of the expensive computations should be within the OpenCV library, and thus independent of the language you use. If you're really concerned about efficiency, you could profile your code and confirm that this is indeed the case. If need be, your custom processing functions, if any, could be coded in C/C++ and exposed in python through the method of your choice (eg: boost-python), to follow the same approach. But in my experience, python works just fine as a "composition" tool for such a use.
I'm doing some prototyping with OpenCV for a hobby project involving processing of real time camera data. I wonder if it is worth the effort to reimplement this in C or C++ when I have it all figured out or if no significant performance boost can be expected. The program basically chains OpenCV functions, so the main part of the work should be done in native code anyway.
0
1
1,352
0
2,470,491
0
0
1
0
2
false
1
2010-03-12T12:54:00.000
0
2
0
OpenCV performance in different languages
2,432,792
0
c++,python,c,performance,opencv
OpenCV used to utilize IPP, which is very fast. However, OpenCV 2.0 does not. You might customize your OpenCV using IPP, for example color conversion routines.
I'm doing some prototyping with OpenCV for a hobby project involving processing of real time camera data. I wonder if it is worth the effort to reimplement this in C or C++ when I have it all figured out or if no significant performance boost can be expected. The program basically chains OpenCV functions, so the main part of the work should be done in native code anyway.
0
1
1,352
0
2,460,285
0
0
0
0
1
false
9
2010-03-17T03:29:00.000
1
4
0
Python KMeans clustering words
2,459,739
0.049958
python,cluster-analysis
Yeah, I think there isn't a good implementation for what I need. I have some crazy requirements, like distance caching etc. So I think I will just write my own lib and release it as GPLv3 soon.
I am interested to perform kmeans clustering on a list of words with the distance measure being Leveshtein. 1) I know there are a lot of frameworks out there, including scipy and orange that has a kmeans implementation. However they all require some sort of vector as the data which doesn't really fit me. 2) I need a good clustering implementation. I looked at python-clustering and realize that it doesn't a) return the sum of all the distance to each centroid, and b) it doesn't have any sort of iteration limit or cut off which ensures the quality of the clustering. python-clustering and the clustering algorithm on daniweb doesn't really work for me. Can someone find me a good lib? Google hasn't been my friend
0
1
1,947
0
2,469,142
0
0
0
0
1
false
4
2010-03-18T10:31:00.000
7
2
0
Open-source implementation of Mersenne Twister in Python?
2,469,031
1
python,random,open-source,mersenne-twister
Mersenne Twister is the implementation used by the standard Python library. You can see it in the random.py file in your Python distribution. On my system (Ubuntu 9.10) it is in /usr/lib/python2.6; on Windows it should be in C:\Python26\Lib
Is there any good open-source implementation of Mersenne Twister and other good random number generators in Python available? I would like to use it for teaching math and comp sci majors. I am also looking for the corresponding theoretical support. Edit: Source code of Mersenne Twister is readily available in various languages such as C (random.py) or pseudocode (Wikipedia) but I could not find one in Python.
0
1
5,857
0
2,489,898
0
1
0
0
1
false
1
2010-03-21T20:55:00.000
1
2
0
python parallel computing: split keyspace to give each node a range to work on
2,488,670
0.099668
python,algorithm,cluster-analysis,character-encoding
You should be able to treat your words as numerals in a strange base. For example, let's say you have a..z as your charset (26 characters), 4 character strings, and you want to distribute equally among 10 machines. Then there are a total of 26^4 strings, so each machine gets 26^4/10 strings. The first machine will get strings 0 through 26^4/10, the next 26^4/10 through 26^4/5, etc. To convert the numbers to strings, just write the number in base 26 using your charset as the digits. So 0 is 'aaaa' and 26^4/10 = 2*26^3 + 15*26^2 + 15*26 + 15 is 'cppp'.
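A small sketch of that base conversion and range split; the charset, word length, and node count are just the example values from above:

    charset = 'abcdefghijklmnopqrstuvwxyz'
    length = 4
    nodes = 10

    def number_to_word(n):
        # write n in base len(charset), padded to the fixed word length
        digits = []
        for _ in range(length):
            n, r = divmod(n, len(charset))
            digits.append(charset[r])
        return ''.join(reversed(digits))

    total = len(charset) ** length
    for i in range(nodes):
        start = i * total // nodes
        end = (i + 1) * total // nodes - 1
        print(number_to_word(start) + ' - ' + number_to_word(end))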
My question is rather complicated for me to explain, as I'm not really good at maths, but I'll try to be as clear as possible. I'm trying to code a cluster in Python, which will generate words given a charset (i.e. with lowercase: aaaa, aaab, aaac, ..., zzzz) and perform various operations on them. I'm trying to work out how to calculate, given the charset and the number of nodes, what range each node should work on (i.e.: node1: aaaa-azzz, node2: baaa-czzz, node3: daaa-ezzz, ...). Is it possible to make an algorithm that could compute this, and if it is, how could I implement this in Python? I really don't know how to do that, so any help would be much appreciated
0
1
260
0
2,497,667
0
1
0
0
1
false
11
2010-03-23T03:47:00.000
4
5
0
Plot string values in matplotlib
2,497,449
0.158649
python,matplotlib
Why not just make the x value some auto-incrementing number and then change the label? --jed
I am using matplotlib for a graphing application. I am trying to create a graph which has strings as the X values. However, the plot function expects a numeric value for X. How can I use string X values?
0
1
48,768
0
5,876,058
0
0
0
0
1
false
96
2010-03-25T00:24:00.000
78
22
0
how to merge 200 csv files in Python
2,512,386
1
python,csv,merge,concatenation
Why can't you just sed 1d sh*.csv > merged.csv? Sometimes you don't even have to use python!
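If you'd rather stay in Python, a rough equivalent might look like the sketch below; it assumes the files match a glob pattern such as 'SH*.csv' and, unlike the sed one-liner, it keeps the header row from the first file:

    import glob

    files = sorted(glob.glob('SH*.csv'))
    with open('merged.csv', 'w') as out:
        for i, name in enumerate(files):
            with open(name) as f:
                header = f.readline()
                if i == 0:
                    out.write(header)      # keep one copy of the header
                for line in f:
                    out.write(line)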
Guys, I have 200 separate csv files here named from SH (1) to SH (200). I want to merge them into a single csv file. How can I do it?
0
1
189,456
1
3,120,753
0
0
0
0
1
false
8
2010-03-25T07:45:00.000
1
1
0
How to copy matplotlib figure?
2,513,786
0.197375
python,wxpython,copy,matplotlib
I'm not familiar with the inner workings, but could easily imagine how disposing of a frame damages the figure data. Is it expensive to draw? Otherwise I'd take the somewhat chickenish approach of simply redrawing it ;)
I have a FigureCanvasWxAgg instance with a figure displayed on a frame. If the user clicks on the canvas, another frame with a new FigureCanvasWxAgg containing the same figure will be shown. Right now, closing the new frame can result in destroying the C++ part of the figure so that it won't be available for the first frame. How can I save the figure? Python deepcopy from the copy module doesn't work in this case. Thanks in advance.
0
1
1,759
0
2,521,903
0
0
0
0
1
false
4
2010-03-25T19:29:00.000
2
5
0
A 3-D grid of regularly spaced points
2,518,730
0.07983
python,numpy
I would say go with meshgrid or mgrid, in particular if you need non-integer coordinates. I'm surprised that Numpy's broadcasting rules would be more efficient, as meshgrid was designed especially for the problem that you want to solve.
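For reference, a minimal mgrid sketch that produces the flat list of 3-element tuples; the grid size and unit spacing here are arbitrary:

    import numpy as np

    nx, ny, nz = 3, 3, 3                     # grid dimensions, spacing of 1
    grid = np.mgrid[0:nx, 0:ny, 0:nz]        # array of shape (3, nx, ny, nz)

    # flatten into a list of (x, y, z) tuples
    points = [tuple(p) for p in grid.reshape(3, -1).T]
    print(len(points), points[:4])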
I want to create a list containing the 3-D coords of a grid of regularly spaced points, each as a 3-element tuple. I'm looking for advice on the most efficient way to do this. In C++ for instance, I simply loop over three nested loops, one for each coordinate. In Matlab, I would probably use the meshgrid function (which would do it in one command). I've read about meshgrid and mgrid in Python, and I've also read that using numpy's broadcasting rules is more efficient. It seems to me that using the zip function in combination with the numpy broadcast rules might be the most efficient way, but zip doesn't seem to be overloaded in numpy.
0
1
1,877
0
2,528,683
0
0
0
0
1
true
4
2010-03-26T22:28:00.000
6
2
1
Is Using Python to MapReduce for Cassandra Dumb?
2,527,173
1.2
python,mongodb,cassandra,couchdb,nosql
Cassandra supports map reduce since version 0.6. (Current stable release is 0.5.1, but go ahead and try the new map reduce functionality in 0.6.0-beta3) To get started I recommend to take a look at the word count map reduce example in 'contrib/word_count'.
Since Cassandra doesn't have MapReduce built in yet (I think it's coming in 0.7), is it dumb to try and MapReduce with my Python client or should I just use CouchDB or Mongo or something? The application is stats collection, so I need to be able to sum values with grouping to increment counters. I'm not, but pretend I'm making Google analytics so I want to keep track of which browsers appear, which pages they went to, and visits vs. pageviews. I would just atomically update my counters on write, but Cassandra isn't very good at counters either. Maybe Cassandra just isn't the right choice for this? Thanks!
0
1
1,625
0
2,546,266
0
0
0
0
1
false
4
2010-03-30T14:35:00.000
3
4
0
How should I use random.jumpahead in Python
2,546,039
0.148885
python,random
jumpahead(1) is indeed sufficient (and identical to jumpahead(50000) or any other such call, in the current implementation of random -- I believe that came in at the same time as the Mersenne Twister based implementation). So use whatever argument fits in well with your program's logic. (Do use a separate random.Random instance per thread for thread-safety purposes of course, as your question already hints.) (random module generated numbers are not meant to be cryptographically strong, so it's a good thing that you're not using them for security purposes ;-).
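A minimal sketch of the separate-generator-per-experiment idea; the seed offset and counts below are made up, the point is just that each experiment draws from its own random.Random instance:

    import random

    experiments = 1000

    # one independent generator per experiment, each with its own seed
    generators = [random.Random(1234 + expid) for expid in range(experiments)]

    def run_experiment(expid):
        rng = generators[expid]
        # ... the real experiment would call rng.random() instead of random.random()
        return sum(rng.random() for _ in range(50000))

    print(run_experiment(0))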
I have an application that does a certain experiment 1000 times (multi-threaded, so that multiple experiments are done at the same time). Every experiment needs approximately 50,000 random.random() calls. What is the best approach to get this really random? I could copy a random object to every experiment and then do a jumpahead of 50,000 * expid. The documentation suggests that jumpahead(1) already scrambles the state, but is that really true? Or is there another way to do this in 'the best way'? (No, the random numbers are not used for security, but for a Metropolis-Hastings algorithm. The only requirement is that the experiments are independent, not whether the random sequence is somehow predictable or so)
0
1
2,828
0
2,569,226
0
1
0
0
2
false
4
2010-04-01T18:31:00.000
0
4
0
MATLAB-like variable editor in Python
2,562,697
0
python,numpy,ipython
In IPython, ipipe.igrid() can be used to view tabular data.
Is there a data viewer in Python/IPython like the variable editor in MATLAB?
0
1
2,415
0
28,463,696
0
1
0
0
2
false
4
2010-04-01T18:31:00.000
0
4
0
MATLAB-like variable editor in Python
2,562,697
0
python,numpy,ipython
Even Pycharm will be a good option if you are looking for MATLAB like editor.
Is there a data viewer in Python/IPython like the variable editor in MATLAB?
0
1
2,415
0
2,564,787
0
0
1
0
2
false
10
2010-04-01T21:17:00.000
3
2
0
Better use a tuple or numpy array for storing coordinates
2,563,773
0.291313
python,arrays,numpy,tuples,complex-numbers
A numpy array with an extra dimension is tighter in memory use, and at least as fast, as a numpy array of tuples; complex numbers are at least as good or even better, including for your third question. BTW, you may have noticed that -- while questions asked later than yours were getting answers aplenty -- yours was lying fallow: part of the reason is no doubt that asking three questions within a question turns responders off. Why not just ask one question per question? It's not as if you get charged for questions or anything, you know...!-)
I'm porting a C++ scientific application to Python, and as I'm new to Python, some problems come to my mind: 1) I'm defining a class that will contain the coordinates (x,y). These values will be accessed several times, but they will only be read after the class instantiation. Is it better to use a tuple or a numpy array, both memory- and access-time-wise? 2) In some cases, these coordinates will be used to build a complex number, evaluated on a complex function, and the real part of this function will be used. Assuming that there is no way to separate the real and complex parts of this function, and the real part will have to be used at the end, maybe it is better to use complex numbers directly to store (x,y)? How bad is the overhead of the transformation from complex to real in Python? The code in C++ does a lot of these transformations, and this is a big slowdown in that code. 3) Also, some coordinate transformations will have to be performed, and for these the x and y values will be accessed separately, the transformation done, and the result returned. The coordinate transformations are defined in the complex plane, so is it still faster to use the components x and y directly than to rely on the complex variables? Thank you
0
1
6,196
0
2,564,868
0
0
1
0
2
true
10
2010-04-01T21:17:00.000
7
2
0
Better use a tuple or numpy array for storing coordinates
2,563,773
1.2
python,arrays,numpy,tuples,complex-numbers
In terms of memory consumption, numpy arrays are more compact than Python tuples. A numpy array uses a single contiguous block of memory. All elements of the numpy array must be of a declared type (e.g. 32-bit or 64-bit float.) A Python tuple does not necessarily use a contiguous block of memory, and the elements of the tuple can be arbitrary Python objects, which generally consume more memory than numpy numeric types. So this issue is a hands-down win for numpy, (assuming the elements of the array can be stored as a numpy numeric type). On the issue of speed, I think the choice boils down to the question, "Can you vectorize your code?" That is, can you express your calculations as operations done on entire arrays element-wise. If the code can be vectorized, then numpy will most likely be faster than Python tuples. (The only case I could imagine where it might not be, is if you had many very small tuples. In this case the overhead of forming the numpy arrays and one-time cost of importing numpy might drown-out the benefit of vectorization.) An example of code that could not be vectorized would be if your calculation involved looking at, say, the first complex number in an array z, doing a calculation which produces an integer index idx, then retrieving z[idx], doing a calculation on that number, which produces the next index idx2, then retrieving z[idx2], etc. This type of calculation might not be vectorizable. In this case, you might as well use Python tuples, since you won't be able to leverage numpy's strength. I wouldn't worry about the speed of accessing the real/imaginary parts of a complex number. My guess is the issue of vectorization will most likely determine which method is faster. (Though, by the way, numpy can transform an array of complex numbers to their real parts simply by striding over the complex array, skipping every other float, and viewing the result as floats. Moreover, the syntax is dead simple: If z is a complex numpy array, then z.real is the real parts as a float numpy array. This should be far faster than the pure Python approach of using a list comprehension of attribute lookups: [z.real for z in zlist].) Just out of curiosity, what is your reason for porting the C++ code to Python?
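A tiny illustration of that last point; the coordinates are made up:

    import numpy as np

    z = np.array([1 + 2j, 3 - 1j, -0.5 + 0.25j])   # toy complex coordinates

    xs = z.real        # real parts as a float array (a view, no copy)
    ys = z.imag        # imaginary parts

    rotated = z * 1j   # a vectorized coordinate transformation: rotate every point by 90 degrees
    print(rotated.real, rotated.imag)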
I'm porting a C++ scientific application to Python, and as I'm new to Python, some problems come to my mind: 1) I'm defining a class that will contain the coordinates (x,y). These values will be accessed several times, but they will only be read after the class instantiation. Is it better to use a tuple or a numpy array, both memory- and access-time-wise? 2) In some cases, these coordinates will be used to build a complex number, evaluated on a complex function, and the real part of this function will be used. Assuming that there is no way to separate the real and complex parts of this function, and the real part will have to be used at the end, maybe it is better to use complex numbers directly to store (x,y)? How bad is the overhead of the transformation from complex to real in Python? The code in C++ does a lot of these transformations, and this is a big slowdown in that code. 3) Also, some coordinate transformations will have to be performed, and for these the x and y values will be accessed separately, the transformation done, and the result returned. The coordinate transformations are defined in the complex plane, so is it still faster to use the components x and y directly than to rely on the complex variables? Thank you
0
1
6,196
0
2,569,232
0
0
0
0
1
false
3
2010-04-02T19:23:00.000
0
7
0
Does Python/Scipy have a firls( ) replacement (i.e. a weighted, least squares, FIR filter design)?
2,568,707
0
python,algorithm,math,matlab,digital-filter
It seems unlikely that you'll find exactly what you seek already written in Python, but perhaps the Matlab function's help page gives or references a description of the algorithm?
I am porting code from Matlab to Python and am having trouble finding a replacement for the firls( ) routine. It is used for least-squares, linear-phase Finite Impulse Response (FIR) filter design. I looked at scipy.signal and nothing there looked like it would do the trick. Of course I was able to replace my remez and freqz algorithms, so that's good. On one blog I found an algorithm that implemented this filter without weighting, but I need one with weights. Thanks, David
0
1
4,383
0
2,661,971
0
0
0
0
1
true
12
2010-04-04T00:25:00.000
20
3
0
What is the best interface from Python 3.1.1 to R?
2,573,132
1.2
python,r,interface,python-3.x
Edit: rewritten to summarize the edits that accumulated over time. The current rpy2 release (2.3.x series) has full support for Python 3.3, while no claim is made about Python 3.0, 3.1, or 3.2. At the time of writing, the next rpy2 release (under development, 2.4.x series) only supports Python 3.3. History of Python 3 support: the rpy2-2.1.0-dev / Python 3 branch in the repository provided experimental support, and an application was made for a Google Summer of Code project consisting of porting rpy2 to Python 3 (under the Python umbrella); the application was accepted, and thanks to Google's funding, support for Python 3 slowly got into the main codebase (there was a fair bit of work still to be done after the GSoC - it made it in for branch version_2.2.x).
I am using Python 3.1.1 on Mac OS X 10.6.2 and need an interface to R. When browsing the internet I found out about RPy. Is this the right choice? Currently, a program in Python computes a distance matrix and, stores it in a file. I invoke R separately in an interactive way and read in the matrix for cluster analysis. In order to simplify computation one could prepare a script file for R then call it from Python and read back the results. Since I am new to Python, I would not like to go back to 2.6.
0
1
4,111
0
2,590,381
0
1
0
0
1
false
5
2010-04-07T06:05:00.000
0
2
0
Histogram in Matplotlib with input file
2,590,328
0
python,matplotlib,histogram
You can't directly tell matplotlib to make a histogram from an input file - you'll need to open the file yourself and get the data from it. How you'd do that depends on the format of the file - if it's just a file with a number on each line, you can just go through each line, strip() spaces and newlines, and use float() to convert it to a number.
I wish to make a Histogram in Matplotlib from an input file containing the raw data (.txt). I am facing issues in referring to the input file. I guess it should be a rather small program. Any Matplotlib gurus, any help ? I am not asking for the code, some inputs should put me on the right way !
0
1
14,040
0
2,597,749
0
1
0
0
1
false
9
2010-04-08T03:41:00.000
4
2
0
random.randint(1,n) in Python
2,597,444
0.379949
python,random
No doubt you have a bounded amount of memory, and address space, on your machine; for example, for a good 64-bit machine, 64 GB of RAM [[about 2**36 bytes]] and a couple of TB of disk (usable as swap space for virtual memory) [[about 2**41 bytes]]. So, the "upper bound" of a Python long integer will be the largest one representable in the available memory -- a bit less than 256**(2**40) if you are in absolutely no hurry and can swap like crazy, a bit more than 256**(2**36) (with just a little swapping but not too much) in practical terms. Unfortunately it would take quite a bit of time and space to represent these ridiculously humongous numbers in decimal, so, instead of showing them, let me check back with you -- why would you even care about such a ridiculous succession of digits as to constitute the "upper bound" you're inquiring about? I think it's more practical to put it this way: especially on a 64-bit machine with decent amounts of RAM and disk, upper bounds of long integers are way bigger than anything you'll ever compute. Technically, a mathematician would insist, they're not infinity, of course... but practically, they might as well be!-)
Most of us know that the command random.randint(1,n) in Python (2.X.X) would generate a number in random (pseudo-random) between 1 and n. I am interested in knowing what is the upper limit for n ?
0
1
19,740
0
2,621,082
0
1
0
0
2
false
4
2010-04-12T09:44:00.000
11
2
0
random() in python
2,621,055
1
python,random
The [ indicates that 0.0 is included in the range of valid outputs. The ) indicates 1.0 is not in the range of valid outputs.
In python the function random() generates a random float uniformly in the semi-open range [0.0, 1.0). In principle can it ever generate 0.0 (i.e. zero) and 1.0 (i.e. unity)? What is the scenario in practicality?
0
1
1,409
0
2,621,096
0
1
0
0
2
true
4
2010-04-12T09:44:00.000
13
2
0
random() in python
2,621,055
1.2
python,random
0.0 can be generated; 1.0 cannot (since it isn't within the range, hence the ) as opposed to [). The probability of generating 0.0 is equal to the probability of generating any other number within that range, namely, 1/X where X is the number of different possible results. For a standard unsigned double-precision floating point, this usually means 53 bits of fractional component, for 2^53 possible combinations, leading to a 1/(2^53) chance of generating exactly 0.0. So while it's possible for it to return exactly 0.0, it's unlikely that you'll see it any time soon - but it's just as unlikely that you'd see exactly any other particular value you might choose in advance.
In python the function random() generates a random float uniformly in the semi-open range [0.0, 1.0). In principle can it ever generate 0.0 (i.e. zero) and 1.0 (i.e. unity)? What is the scenario in practicality?
0
1
1,409
0
2,661,789
0
0
0
0
1
false
50
2010-04-18T09:39:00.000
2
5
0
tag generation from a text content
2,661,778
0.07983
python,tags,machine-learning,nlp,nltk
A very simple solution to the problem would be: count the occurrences of each word in the text; consider the most frequent terms as the key phrases; have a black-list of 'stop words' to remove common words like the, and, it, is, etc. I'm sure there are cleverer, stats-based solutions though. If you need a solution to use in a larger project rather than for interest's sake, Yahoo BOSS has a key term extraction method.
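A minimal sketch of that frequency-count approach; the stop-word list and the sample text are made up, and collections.Counter comes from the standard library:

    import re
    from collections import Counter

    stop_words = {'the', 'and', 'it', 'is', 'a', 'of', 'to', 'in', 'for'}

    text = "Python is a language and Python is used for machine learning in many projects."

    words = re.findall(r'[a-z]+', text.lower())
    counts = Counter(w for w in words if w not in stop_words)

    print(counts.most_common(3))   # the most frequent terms as candidate tags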
I am curious whether an algorithm/method exists to generate keywords/tags from a given text, using some weight calculations, occurrence ratios or other tools. Additionally, I will be grateful if you point out any Python-based solution / library for this. Thanks
0
1
28,812
0
2,669,456
0
0
0
0
1
true
4
2010-04-19T12:58:00.000
1
1
0
Statistical analysis on large data set to be published on the web
2,667,537
1.2
php,python,postgresql,statistics
I think you can utilize your current combination (python/numpy/matplotlib) fully if the number of users is not too big. I do some similar work, and my data size is a little more than 10 GB. Data are stored in a few sqlite files, and I use numpy to analyze the data, PIL/matplotlib to generate chart files (png, gif), cherrypy as a webserver, and mako as a template language. If you need more of a server/client database, then you can migrate to postgresql, but you can still fully use your current programs if you go with a Python web framework, like cherrypy.
I have a non-computer related data logger, that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is through a csv file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. some of the data is organized as bins item_type, item_class, item_dimension_class, and other data is more unique, such as item_weight, item_color, date_collected, and so on ... Currently, I do statistical analysis on the data using a python/numpy/matplotlib program I wrote. It works fine, but the problem is, I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a postgres db; however, I need to find or implement a statistical tool that'll take a large postgres table, and return statistical results within an adequate time frame. I'm not familiar with python for the web; however, I'm proficient with PHP on the web side, and python on the offline side. users should be allowed to create their own histograms, data analysis. For example, a user can search for all items that are blue shipped between week x and week y, while another user can search for sort the weight distribution of all items by hour for all year long. I was thinking of creating and indexing my own statistical tools, or automate the process somehow to emulate most queries. This seemed inefficient. I'm looking forward to hearing your ideas Thanks
0
1
1,770
0
2,726,598
0
1
1
0
2
false
3
2010-04-27T18:13:00.000
1
4
0
List of objects or parallel arrays of properties?
2,723,790
0.049958
python,performance,data-structures,numpy
I think it depends on what you're going to be doing with them, and how often you're going to be working with (all attributes of one particle) vs (one attribute of all particles). The former is better suited to the object approach; the latter is better suited to the array approach. I was facing a similar problem (although in a different domain) a couple of years ago. The project got deprioritized before I actually implemented this phase, but I was leaning towards a hybrid approach, where in addition to the Ball class I would have an Ensemble class. The Ensemble would not be a list or other simple container of Balls, but would have its own attributes (which would be arrays) and its own methods. Whether the Ensemble is created from the Balls, or the Balls from the Ensemble, depends on how you're going to construct them. One of my coworkers was arguing for a solution where the fundamental object was an Ensemble which might contain only one Ball, so that no calling code would ever have to know whether you were operating on just one Ball (do you ever do that for your application?) or on many.
The question is, basically: what would be more preferable, both performance-wise and design-wise - to have a list of objects of a Python class or to have several lists of numerical properties? I am writing some sort of a scientific simulation which involves a rather large system of interacting particles. For simplicity, let's say we have a set of balls bouncing inside a box so each ball has a number of numerical properties, like x-y-z-coordinates, diameter, mass, velocity vector and so on. How to store the system better? Two major options I can think of are: to make a class "Ball" with those properties and some methods, then store a list of objects of the class, e. g. [b1, b2, b3, ...bn, ...], where for each bn we can access bn.x, bn.y, bn.mass and so on; to make an array of numbers for each property, then for each i-th "ball" we can access it's 'x' coordinate as xs[i], 'y' coordinate as ys[i], 'mass' as masses[i] and so on; To me it seems that the first option represents a better design. The second option looks somewhat uglier, but might be better in terms of performance, and it could be easier to use it with numpy and scipy, which I try to use as much as I can. I am still not sure if Python will be fast enough, so it may be necessary to rewrite it in C++ or something, after initial prototyping in Python. Would the choice of data representation be different for C/C++? What about a hybrid approach, e.g. Python with C++ extension? Update: I never expected any performance gain from parallel arrays per se, but in a mixed environment like Python + Numpy (or whatever SlowScriptingLanguage + FastNativeLibrary) using them may (or may not?) let you move more work out of you slow scripting code and into the fast native library.
0
1
1,102
0
2,723,845
0
1
1
0
2
false
3
2010-04-27T18:13:00.000
2
4
0
List of objects or parallel arrays of properties?
2,723,790
0.099668
python,performance,data-structures,numpy
Having an object for each ball in this example is certainly better design. Parallel arrays are really a workaround for languages that do not support proper objects. I wouldn't use them in a language with OO capabilities unless it's a tiny case that fits within a function (and maybe not even then) or if I've run out of every other optimization option and the profiler shows that property access is the culprit. This applies twice as much to Python as to C++, as the former places a large emphasis on readability and elegance.
The question is, basically: what would be more preferable, both performance-wise and design-wise - to have a list of objects of a Python class or to have several lists of numerical properties? I am writing some sort of a scientific simulation which involves a rather large system of interacting particles. For simplicity, let's say we have a set of balls bouncing inside a box so each ball has a number of numerical properties, like x-y-z-coordinates, diameter, mass, velocity vector and so on. How to store the system better? Two major options I can think of are: to make a class "Ball" with those properties and some methods, then store a list of objects of the class, e. g. [b1, b2, b3, ...bn, ...], where for each bn we can access bn.x, bn.y, bn.mass and so on; to make an array of numbers for each property, then for each i-th "ball" we can access it's 'x' coordinate as xs[i], 'y' coordinate as ys[i], 'mass' as masses[i] and so on; To me it seems that the first option represents a better design. The second option looks somewhat uglier, but might be better in terms of performance, and it could be easier to use it with numpy and scipy, which I try to use as much as I can. I am still not sure if Python will be fast enough, so it may be necessary to rewrite it in C++ or something, after initial prototyping in Python. Would the choice of data representation be different for C/C++? What about a hybrid approach, e.g. Python with C++ extension? Update: I never expected any performance gain from parallel arrays per se, but in a mixed environment like Python + Numpy (or whatever SlowScriptingLanguage + FastNativeLibrary) using them may (or may not?) let you move more work out of you slow scripting code and into the fast native library.
0
1
1,102
0
2,744,657
0
1
0
0
3
false
2
2010-04-30T12:41:00.000
3
4
0
Python: Plot some data (matplotlib) without GIL
2,744,530
0.148885
python,matplotlib,parallel-processing,gil
I think you'll need to put the graph into a proper Windowing system, rather than relying on the built-in show code. Maybe sticking the .show() in another thread would be sufficient? The GIL is irrelevant - you've got a blocking show() call, so you need to handle that first.
my problem is the GIL of course. While I'm analysing data it would be nice to present some plots in between (so it's not too boring waiting for results) But the GIL prevents this (and this is bringing me to the point of asking myself if Python was such a good idea in the first place). I can only display the plot, wait till the user closes it and commence calculations after that. A waste of time obviously. I already tried the subprocess and multiprocessing modules but can't seem to get them to work. Any thoughts on this one? Thanks Edit: Ok so it's not the GIL but show().
0
1
737
0
2,744,604
0
1
0
0
3
false
2
2010-04-30T12:41:00.000
3
4
0
Python: Plot some data (matplotlib) without GIL
2,744,530
0.148885
python,matplotlib,parallel-processing,gil
This has nothing to do with the GIL, just modify your analysis code to make it update the graph from time to time (for example every N iterations). Only then if you see that drawing the graph slows the analysis code too much, put the graph update code in a subprocess with multiprocessing.
my problem is the GIL of course. While I'm analysing data it would be nice to present some plots in between (so it's not too boring waiting for results) But the GIL prevents this (and this is bringing me to the point of asking myself if Python was such a good idea in the first place). I can only display the plot, wait till the user closes it and commence calculations after that. A waste of time obviously. I already tried the subprocess and multiprocessing modules but can't seem to get them to work. Any thoughts on this one? Thanks Edit: Ok so it's not the GIL but show().
0
1
737
0
2,744,906
0
1
0
0
3
false
2
2010-04-30T12:41:00.000
2
4
0
Python: Plot some data (matplotlib) without GIL
2,744,530
0.099668
python,matplotlib,parallel-processing,gil
It seems like the draw() method can circumvent the need for show(). The only reason left for .show() in the script is to let it do the blocking part so that the images don't disappear when the script reaches its end.
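For reference, a rough sketch of the periodic-redraw idea with a current matplotlib; plt.pause() does the draw-and-yield step, and the loop body stands in for one chunk of the real analysis:

    import matplotlib.pyplot as plt

    plt.ion()                       # interactive mode: drawing calls won't block
    fig, ax = plt.subplots()

    for step in range(100):
        # ... one chunk of the analysis would go here ...
        ax.clear()
        ax.plot(range(step + 1))
        plt.pause(0.01)             # updates the figure and returns immediately

    plt.ioff()
    plt.show()                      # final blocking show so the window stays open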
my problem is the GIL of course. While I'm analysing data it would be nice to present some plots in between (so it's not too boring waiting for results) But the GIL prevents this (and this is bringing me to the point of asking myself if Python was such a good idea in the first place). I can only display the plot, wait till the user closes it and commence calculations after that. A waste of time obviously. I already tried the subprocess and multiprocessing modules but can't seem to get them to work. Any thoughts on this one? Thanks Edit: Ok so it's not the GIL but show().
0
1
737
0
2,770,393
0
0
1
0
5
false
3
2010-05-05T01:24:00.000
1
6
0
R or Python for file manipulation
2,770,030
0.033321
python,file,r,performance
What do you mean by "file manipulation"? Are you talking about moving files around, deleting, copying, etc.? In that case I would use a shell, e.g., bash. If you're talking about reading in the data, performing calculations, and perhaps writing out a new file, then you could probably use Python or R. Unless maintenance is an issue, I would just leave it as R and find other fish to fry, as you're not going to see enough of a speedup to justify your time and effort in porting that code.
I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?
0
1
2,365
0
2,770,138
0
0
1
0
5
false
3
2010-05-05T01:24:00.000
1
6
0
R or Python for file manipulation
2,770,030
0.033321
python,file,r,performance
Know where the time is being spent. If your R scripts are bottlenecked on disk IO (and that is very possible in this case), then you could rewrite them in hand-optimized assembly and be no faster. As always with optimization, if you don't measure first, you're just pissing into the wind. If they're not bottlenecked on disk IO, you would likely see more benefit from improving the algorithm than changing the language.
I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?
0
1
2,365
0
2,770,071
0
0
1
0
5
false
3
2010-05-05T01:24:00.000
0
6
0
R or Python for file manipulation
2,770,030
0
python,file,r,performance
My guess is that you probably won't see much of a speed-up in time. When comparing high-level languages, overhead in the language is typically not to blame for performance problems. Typically, the problem is your algorithm. I'm not very familiar with R, but you may find speed-ups by reading larger chunks of data into memory at once vs smaller chunks (less system calls). If R doesn't have the ability to change something like this, you will probably find that python can be much faster simply because of this ability.
I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?
0
1
2,365
0
2,771,903
0
0
1
0
5
false
3
2010-05-05T01:24:00.000
0
6
0
R or Python for file manipulation
2,770,030
0
python,file,r,performance
R data manipulation has rules for it to be fast. The basics are: vectorize; use data.frames as little as possible (for example, in the end). Search for R time optimization and profiling and you will find many resources to help you.
I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?
0
1
2,365
0
2,770,354
0
0
1
0
5
true
3
2010-05-05T01:24:00.000
10
6
0
R or Python for file manipulation
2,770,030
1.2
python,file,r,performance
I write in both R and Python regularly. I find Python modules for writing, reading and parsing information easier to use, maintain and update. Little niceties like the way python lets you deal with lists of items over R's indexing make things much easier to read. I highly doubt you will gain any significant speed-up by switching the language. If you are becoming the new "maintainer" of these scripts and you find Python easier to understand and extend, then I'd say go for it. Computer time is cheap ... programmer time is expensive. If you have other things to do then I'd just limp along with what you've got until you have a free day to putz with them. Hope that helps.
I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?
0
1
2,365
0
2,785,460
0
0
0
0
1
false
4
2010-05-05T03:02:00.000
1
3
0
Extract points within a shape from a raster
2,770,356
0.066568
python,arcgis,raster
You need a library that can read your raster. I am not sure how to do that in Python but you could look at geotools (especially with some of the new raster library integration) if you want to program in Java. If you are good with C I would recommend using something like GDAL. If you want to look at a desktop tool you could look at extending QGIS with Python to do the operation above. If I remember correctly, the Raster extension to PostGIS may support clipping rasters based upon vectors. This means you would need to convert your circles to features in the DB and then import your raster, but then you might be able to use SQL to extract your values. If it really is just a text file with numbers in a grid then I would go with the suggestions above.
I have a raster file (basically 2D array) with close to a million points. I am trying to extract a circle from the raster (and all the points that lie within the circle). Using ArcGIS is exceedingly slow for this. Can anyone suggest any image processing library that is both easy to learn and powerful and quick enough for something like this? Thanks!
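If the raster can be loaded as a plain 2D numpy array, a hedged sketch of pulling out the cells inside a circle looks like this; the array contents, center, and radius below are made up:

    import numpy as np

    raster = np.random.rand(1000, 1000)      # stand-in for the real raster
    cy, cx, radius = 500, 400, 120           # circle center (row, col) and radius in cells

    rows, cols = np.ogrid[:raster.shape[0], :raster.shape[1]]
    mask = (rows - cy) ** 2 + (cols - cx) ** 2 <= radius ** 2

    values_inside = raster[mask]             # all cell values within the circle
    print(values_inside.size)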
0
1
3,334
0
2,806,868
0
0
0
0
3
false
9
2010-05-10T22:10:00.000
14
8
0
Huge Graph Structure
2,806,806
1
python,memory,data-structures,graph
If that is 100-600 edges/node, then you are talking about 3.6 billion edges. Why does this have to be all in memory? Can you show us the structures you are currently using? How much memory are we allowed (what is the memory limit you are hitting?) If the only reason you need this in memory is because you need to be able to read and write it fast, then use a database. Databases read and write extremely fast, often they can read without going to disk at all.
I'm developing an application in which I need a structure to represent a huge graph (between 1000000 and 6000000 nodes and 100 or 600 edges per node) in memory. The edges representation will contain some attributes of the relation. I have tried a memory map representation, arrays, dictionaries and strings to represent that structure in memory, but these always crash because of the memory limit. I would like to get an advice of how I can represent this, or something similar. By the way, I'm using python.
0
1
6,997
0
2,806,891
0
0
0
0
3
false
9
2010-05-10T22:10:00.000
4
8
0
Huge Graph Structure
2,806,806
0.099668
python,memory,data-structures,graph
I doubt you'll be able to use a memory structure unless you have a LOT of memory at your disposal: Assume you are talking about 600 directed edges from each node, with a node being 4-bytes (integer key) and a directed edge being JUST the destination node keys (4 bytes each). Then the raw data about each node is 4 + 600 * 4 = 2404 bytes x 6,000,000 = over 14.4GB That's without any other overheads or any additional data in the nodes (or edges).
I'm developing an application in which I need a structure to represent a huge graph (between 1000000 and 6000000 nodes and 100 or 600 edges per node) in memory. The edges representation will contain some attributes of the relation. I have tried a memory map representation, arrays, dictionaries and strings to represent that structure in memory, but these always crash because of the memory limit. I would like to get an advice of how I can represent this, or something similar. By the way, I'm using python.
0
1
6,997
0
2,806,909
0
0
0
0
3
false
9
2010-05-10T22:10:00.000
0
8
0
Huge Graph Structure
2,806,806
0
python,memory,data-structures,graph
Sounds like you need a database and an iterator over the results. Then you wouldn't have to keep it all in memory at the same time but you could always have access to it.
I'm developing an application in which I need a structure to represent a huge graph (between 1000000 and 6000000 nodes and 100 or 600 edges per node) in memory. The edges representation will contain some attributes of the relation. I have tried a memory map representation, arrays, dictionaries and strings to represent that structure in memory, but these always crash because of the memory limit. I would like to get an advice of how I can represent this, or something similar. By the way, I'm using python.
0
1
6,997
0
53,035,704
0
0
0
0
1
false
24
2010-05-14T21:50:00.000
0
2
0
Dimension Reduction in Categorical Data with missing values
2,837,850
0
python,r,statistics
45% of the data have at least one missing value, you say. This is impressive. I would first look at whether there is a pattern. You say they are missing at random. Have you tested for MAR? Have you tested for MAR for sub-groups? Not knowing your data, I would first look at whether there are cases with many missing values and see if there are theoretical or practical reasons to exclude them. Practical reasons relate to the production of the data: the values might not have been well observed, the machine producing the data did not run all the time, the survey did not cover all countries all the time, etc. For instance, you have survey data on current occupation, but part of the respondents are retired, so for them it has to be (system-)missing. You cannot replace these data with some computed value. Maybe you can cut out slices of the cases with full data and look at the conditions of data production.
I have a regression model in which the dependent variable is continuous but ninety percent of the independent variables are categorical (both ordered and unordered) and around thirty percent of the records have missing values (to make matters worse they are missing randomly without any pattern, that is, more than forty-five percent of the data have at least one missing value). There is no a priori theory to choose the specification of the model, so one of the key tasks is dimension reduction before running the regression. While I am aware of several methods for dimension reduction for continuous variables, I am not aware of a similar statistical literature for categorical data (except, perhaps, as a part of correspondence analysis, which is basically a variation of principal component analysis on a frequency table). Let me also add that the dataset is of moderate size: 500000 observations with 200 variables. I have two questions. Is there a good statistical reference out there for dimension reduction for categorical data along with robust imputation (I think the first issue is imputation and then dimension reduction)? This is linked to implementation of the above problem. I have used R extensively earlier and tend to use the transcan and impute functions heavily for continuous variables, and use a variation of the tree method to impute categorical values. I have a working knowledge of Python, so if something nice is out there for this purpose then I will use it. Any implementation pointers in Python or R will be of great help. Thank you.
0
1
16,303
0
2,943,067
0
1
0
0
1
false
3
2010-05-31T08:47:00.000
0
2
0
python dictionary with constant value-type
2,942,375
0
python,arrays,data-structures,dictionary
You might try using std::map. Boost.Python provides a Python wrapping for std::map out-of-the-box.
I bumped into a case where I need a big (=huge) python dictionary, which turned out to be quite memory-consuming. However, since all of the values are of a single type (long) - as well as the keys, I figured I can use a python (or numpy, doesn't really matter) array for the values; and wrap the needed interface (in: x; out: d[x]) with an object which actually uses these arrays for the keys and values storage. I can use an index-conversion object (input --> index, of 1..n, where n is the different-values counter), and return array[index]. I can elaborate on some techniques of how to implement such indexing methods with a reasonable memory requirement; it works, and even pretty well. However, I wonder if such a data-structure object already exists (in Python, or wrapped for Python from C/C++), in any package (I checked collections, and some Google searches). Any comment will be welcome, thanks.
0
1
1,303
0
2,960,944
0
1
0
0
1
false
1
2010-06-02T19:16:00.000
0
4
0
merging in python
2,960,855
0
python
Add all keys and associated values from both sets of data to a single dictionary. Get the items of the dictionary and sort them. Print out the answer.
    k1 = [7, 2, 3, 5]
    v1 = [10, 11, 12, 26]
    k2 = [0, 4]
    v2 = [20, 33]
    d = dict(zip(k1, v1))
    d.update(zip(k2, v2))
    answer = d.items()
    answer.sort()
    keys = [k for (k, v) in answer]
    values = [v for (k, v) in answer]
    print keys
    print values
Edit: This is for Python 2.6 or below, which do not have an ordered dictionary.
I have the following 4 arrays (grouped in 2 groups) that I would like to merge in ascending order by the keys array. I can also use dictionaries as the structure if it is easier. Does Python have any command or something to make this quickly possible? Regards MN
    # group 1
    [7, 2, 3, 5]       # keys
    [10, 11, 12, 26]   # values
    # group 2
    [0, 4]             # keys
    [20, 33]           # values
    # I would like to have
    [0, 2, 3, 4, 5, 7]         # ordered keys
    [20, 11, 12, 33, 26, 10]   # associated values
0
1
125
0
2,968,306
0
0
0
0
2
false
3
2010-06-03T17:06:00.000
1
7
0
How to share an array in Python with a C++ Program?
2,968,172
0.028564
c++,python,serialization
I would propose simply to use C arrays (via ctypes on the Python side) and pull/push the raw data through a socket
I have two programs running, one in Python and one in C++, and I need to share a two-dimensional array (just of decimal numbers) between them. I am currently looking into serialization, but pickle is python-specific, unfortunately. What is the best way to do this? Thanks Edit: It is likely that the array will only have 50 elements or so, but the transfer of data will need to occur very frequently: 60x per second or more.
0
1
2,433
0
2,968,375
0
0
0
0
2
false
3
2010-06-03T17:06:00.000
1
7
0
How to share an array in Python with a C++ Program?
2,968,172
0.028564
c++,python,serialization
Serialization is one problem while IPC is another. Do you have the IPC portion figured out? (pipes, sockets, mmap, etc?) On to serialization - if you're concerned about performance more than robustness (being able to plug more modules into this architecture) and security, then you should take a look at the struct module. This will let you pack data into C structures using format strings to define the structure (takes care of padding, alignment, and byte ordering for you!) In the C++ program, cast a pointer to the buffer to the corresponding structure type. This works well with a tightly-coupled Python script and C++ program that is only run internally.
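A small sketch of the struct approach for the 2-D array of decimals; the array contents and the '<6d' format string (six little-endian doubles for a 2x3 array) are made up, and the socket/pipe transport is left out:

    import struct

    data = [[1.5, 2.0, 3.25], [4.0, 5.5, 6.75]]     # hypothetical 2x3 array of doubles

    flat = [v for row in data for v in row]
    payload = struct.pack('<6d', *flat)             # 48 bytes, fixed layout and byte order

    # ...send payload over the socket/pipe; the C++ side can cast the received
    # buffer to a double[6] (or a struct with a double[2][3] member) and read it

    unpacked = struct.unpack('<6d', payload)        # round-trip check on the Python side
    print(unpacked)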
I have two programs running, one in Python and one in C++, and I need to share a two-dimensional array (just of decimal numbers) between them. I am currently looking into serialization, but pickle is python-specific, unfortunately. What is the best way to do this? Thanks Edit: It is likely that the array will only have 50 elements or so, but the transfer of data will need to occur very frequently: 60x per second or more.
0
1
2,433
0
2,969,618
0
0
0
0
2
true
2
2010-06-03T20:40:00.000
5
4
0
Generate n-dimensional random numbers in Python
2,969,593
1.2
python,random,n-dimensional
Numpy has multidimensional equivalents to the functions in the random module. The function you're looking for is numpy.random.normal
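A minimal sketch; the sizes, mean, and covariance below are arbitrary:

    import numpy as np

    # 1000 points in 3 dimensions, each coordinate drawn independently
    # from a Gaussian with the given mean and standard deviation
    points = np.random.normal(loc=0.0, scale=1.0, size=(1000, 3))
    print(points.shape)          # (1000, 3)

    # for correlated dimensions there is also a full multivariate version
    mean = [0.0, 0.0]
    cov = [[1.0, 0.3], [0.3, 2.0]]
    xy = np.random.multivariate_normal(mean, cov, size=1000)
    print(xy.shape)              # (1000, 2)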
I'm trying to generate random numbers from a gaussian distribution. Python has the very useful random.gauss() method, but this is only a one-dimensional random variable. How could I programmatically generate random numbers from this distribution in n-dimensions? For example, in two dimensions, the return value of this method is essentially distance from the mean, so I would still need (x,y) coordinates to determine an actual data point. I suppose I could generate two more random numbers, but I'm not sure how to set up the constraints. I appreciate any insights. Thanks!
0
1
5,620
0
2,969,634
0
0
0
0
2
false
2
2010-06-03T20:40:00.000
1
4
0
Generate n-dimensional random numbers in Python
2,969,593
0.049958
python,random,n-dimensional
You need to properly decompose your multi-dimensional distribution into a composition of one-dimensional distributions. For example, if you want a point at a Gaussian-distributed distance from a given center and a uniformly-distributed angle around it, you'll get the polar coordinates for the delta with a Gaussian rho and a uniform theta (between 0 and 2 pi), then, if you want cartesian coordinates, you of course do a coordinate transformation.
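A sketch of exactly that decomposition (Gaussian rho, uniform theta, then a polar-to-cartesian step); note that this gives a different distribution from drawing x and y as two independent Gaussians:

    import math
    import random

    def random_point_around(cx, cy, sigma):
        # Gaussian-distributed distance from the center, uniform angle around it
        rho = random.gauss(0.0, sigma)
        theta = random.uniform(0.0, 2.0 * math.pi)
        return cx + rho * math.cos(theta), cy + rho * math.sin(theta)

    print(random_point_around(0.0, 0.0, 1.0))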
I'm trying to generate random numbers from a gaussian distribution. Python has the very useful random.gauss() method, but this is only a one-dimensional random variable. How could I programmatically generate random numbers from this distribution in n-dimensions? For example, in two dimensions, the return value of this method is essentially distance from the mean, so I would still need (x,y) coordinates to determine an actual data point. I suppose I could generate two more random numbers, but I'm not sure how to set up the constraints. I appreciate any insights. Thanks!
0
1
5,620
0
29,524,883
0
0
0
0
1
false
86
2010-06-03T21:19:00.000
117
5
0
How do I add space between the ticklabels and the axes in matplotlib
2,969,867
1
python,matplotlib
If you don't want to change the spacing globally (by editing your rcParams), and want a cleaner approach, try this: ax.tick_params(axis='both', which='major', pad=15) or for just x axis ax.tick_params(axis='x', which='major', pad=15) or the y axis ax.tick_params(axis='y', which='major', pad=15)
I've increased the font of my ticklabels successfully, but now they're too close to the axis. I'd like to add a little breathing room between the ticklabels and the axis.
0
1
89,545
0
2,980,269
0
0
0
1
1
true
3
2010-06-05T12:08:00.000
1
1
0
Efficient way to access a mapping of identifiers in Python
2,980,257
1.2
python,database,sqlite,dictionary,csv
As long as they will all fit in memory, a dict will be the most efficient solution. It's also a lot easier to code. 100k records should be no problem on a modern computer. You are right that switching to an SQLite database is a good choice when the number of records gets very large.
I am writing an app to do a file conversion and part of that is replacing old account numbers with a new account numbers. Right now I have a CSV file mapping the old and new account numbers with around 30K records. I read this in and store it as dict and when writing the new file grab the new account from the dict by key. My question is what is the best way to do this if the CSV file increases to 100K+ records? Would it be more efficient to convert the account mappings from a CSV to a sqlite database rather than storing them as a dict in memory?
0
1
109
0
2,991,030
0
0
0
1
1
false
3
2010-06-07T15:52:00.000
3
3
0
How to save big "database-like" class in python
2,990,995
0.197375
python,serialization,pickle,object-persistence
Pickle (cPickle) can handle any (picklable) Python object. So as long as you're not trying to pickle a thread or a file handle or something like that, you're ok.
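A minimal sketch of saving and re-loading a class instance like the one described in the question; the class body and the file name are simplified stand-ins:

    import pickle

    class DataBase(object):
        def __init__(self):
            self.nodes = [[] for _ in range(1000)]

    db = DataBase()
    db.nodes[0].append(('some', 'record'))

    with open('database.pkl', 'wb') as f:
        pickle.dump(db, f, protocol=pickle.HIGHEST_PROTOCOL)

    with open('database.pkl', 'rb') as f:
        restored = pickle.load(f)

    print(restored.nodes[0])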
I'm doing a project with a reasonably big DataBase. It's not a proper DB file, but a class with a format as follows: DataBase.Nodes.Data=[[] for i in range(1,1000)] f.e. this DataBase is all together something like a few thousand rows. First question - is the way I'm doing it efficient, or is it better to use SQL, or any other "proper" DB, which I've never used actually. And the main question - I'd like to save my DataBase class with all records, and then re-open it with Python in another session. Is that possible, and what tool should I use? cPickle - it seems to be only for strings, any other? In matlab there's very useful functionality named save workspace - it saves all your variables to a file that you can open in another session - this would be very useful in python!
0
1
306
0
2,994,438
0
0
0
0
1
true
1
2010-06-08T02:17:00.000
2
1
0
Extracting Information from Images
2,994,398
1.2
python,image,opencv,identification
Your question is difficult to answer without more clarification about the types of images you are analyzing and your purpose. The tone of the post suggests that you are interested in tinkering -- that's fine. If you want to tinker, one example application might be iris identification using wavelet analysis. You can also try motion tracking; I've done that in OpenCV using the sample projects, and it is kind of interesting. You can try image segmentation for the purpose of scene analysis; take an outdoor photo and segment the image according to texture and/or color. There is no hard number for how large your training set must be. It is highly application-dependent. A few hundred images may suffice.
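For the face-detection part mentioned in the question, a rough sketch using the cv2 bindings is shown below; the image path is a placeholder, and cv2.data.haarcascades assumes the cascades shipped with the opencv-python package:

```python
import cv2

# Cascade file bundled with the opencv-python package; adjust the path for other installs.
cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread('photo.jpg')                     # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print('Found %d face(s)' % len(faces))
```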
What are some fast and somewhat reliable ways to extract information about images? I've been tinkering with OpenCV and this seems so far to be the best route, plus it has Python bindings. So to be more specific, I'd like to determine what I can about what's in an image. So for example the Haar face detection and full body detection classifiers are great - now I can tell that most likely there are faces and/or people in the image, as well as about how many. Okay - what else - how about whether there are any buildings, and if so what do they seem to be - huts, office buildings etc.? Is there sky visible, grass, trees and so forth? From what I've read about training classifiers to detect objects, it seems like a rather laborious process: 10,000 or so wrong images and 5,000 or so correct samples to train a classifier. I'm hoping that there are some decent ones around already instead of having to do this all myself for a bunch of different objects - or is there some other way to go about this sort of thing?
0
1
1,546
0
9,964,718
0
0
0
0
3
false
21
2010-06-10T06:34:00.000
0
4
0
What changes when your input is giga/terabyte sized?
3,012,157
0
python,large-data-volumes,scientific-computing
The main assumptions are about the amount of CPU/cache/RAM/storage/bandwidth you can have in a single machine at an acceptable price. There are lots of answers here at Stack Overflow still based on the old assumptions of a 32-bit machine with 4 GB of RAM, about a terabyte of storage and a 1 Gb network. With 16 GB DDR3 RAM modules at 220 EUR, 512 GB RAM, 48-core machines can be built at reasonable prices. The switch from hard disks to SSDs is another important change.
I just took my first baby step into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny. I write Python, so I've spent the last few hours reading about HDF5, and Numpy, and PyTables, but I still feel like I'm not really grokking what a terabyte-sized data set actually means for me as a programmer. For example, someone pointed out that with larger data sets, it becomes impossible to read the whole thing into memory, not because the machine has insufficient RAM, but because the architecture has insufficient address space! It blew my mind. What other assumptions have I been relying on in the classroom that just don't work with input this big? What kinds of things do I need to start doing or thinking about differently? (This doesn't have to be Python-specific.)
0
1
1,855
0
3,012,350
0
0
0
0
3
false
21
2010-06-10T06:34:00.000
1
4
0
What changes when your input is giga/terabyte sized?
3,012,157
0.049958
python,large-data-volumes,scientific-computing
While some languages have naturally lower memory overhead in their types than others, that really doesn't matter for data this size - you're not holding your entire data set in memory regardless of the language you're using, so the "expense" of Python is irrelevant here. As you pointed out, there simply isn't enough address space to even reference all this data, let alone hold onto it. What this normally means is either a) storing your data in a database, or b) adding resources in the form of additional computers, thus adding to your available address space and memory. Realistically you're going to end up doing both of these things. One key thing to keep in mind when using a database is that a database isn't just a place to put your data while you're not using it - you can do WORK in the database, and you should try to do so. The database technology you use has a large impact on the kind of work you can do, but an SQL database, for example, is well suited to do a lot of set math and do it efficiently (of course, this means that schema design becomes a very important part of your overall architecture). Don't just suck data out and manipulate it only in memory - try to leverage the computational query capabilities of your database to do as much work as possible before you ever put the data in memory in your process.
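As a small illustration of doing work inside the database rather than in Python, here is a sketch with the standard sqlite3 module; the database file, table and column names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect('measurements.db')   # hypothetical database
cur = conn.cursor()

# Let the database do the set math; only the aggregated results reach Python.
cur.execute(
    "SELECT sample_id, AVG(value), COUNT(*) "
    "FROM readings GROUP BY sample_id"
)
for sample_id, mean_value, n in cur:
    print(sample_id, mean_value, n)

conn.close()
```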
I just took my first baby step into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny. I write Python, so I've spent the last few hours reading about HDF5, and Numpy, and PyTables, but I still feel like I'm not really grokking what a terabyte-sized data set actually means for me as a programmer. For example, someone pointed out that with larger data sets, it becomes impossible to read the whole thing into memory, not because the machine has insufficient RAM, but because the architecture has insufficient address space! It blew my mind. What other assumptions have I been relying on in the classroom that just don't work with input this big? What kinds of things do I need to start doing or thinking about differently? (This doesn't have to be Python-specific.)
0
1
1,855
0
3,012,599
0
0
0
0
3
true
21
2010-06-10T06:34:00.000
18
4
0
What changes when your input is giga/terabyte sized?
3,012,157
1.2
python,large-data-volumes,scientific-computing
I'm currently engaged in high-performance computing in a small corner of the oil industry and regularly work with datasets of the orders of magnitude you are concerned about. Here are some points to consider: Databases don't have a lot of traction in this domain. Almost all our data is kept in files; some of those files are based on tape file formats designed in the 70s. I think that part of the reason for the non-use of databases is historic; 10, even 5, years ago I think that Oracle and its kin just weren't up to the task of managing single datasets of O(TB) let alone a database of 1000s of such datasets. Another reason is a conceptual mismatch between the normalisation rules for effective database analysis and design and the nature of scientific data sets. I think (though I'm not sure) that the performance reason(s) are much less persuasive today. And the concept-mismatch reason is probably also less pressing now that most of the major databases available can cope with spatial data sets which are generally a much closer conceptual fit to other scientific datasets. I have seen an increasing use of databases for storing meta-data, with some sort of reference, then, to the file(s) containing the sensor data. However, I'd still be looking at, in fact am looking at, HDF5. It has a couple of attractions for me: (a) it's just another file format so I don't have to install a DBMS and wrestle with its complexities, and (b) with the right hardware I can read/write an HDF5 file in parallel. (Yes, I know that I can read and write databases in parallel too). Which takes me to the second point: when dealing with very large datasets you really need to be thinking of using parallel computation. I work mostly in Fortran; one of its strengths is its array syntax which fits very well onto a lot of scientific computing; another is the good support for parallelisation available. I believe that Python has all sorts of parallelisation support too, so it's probably not a bad choice for you. Sure you can add parallelism on to sequential systems, but it's much better to start out designing for parallelism. To take just one example: the best sequential algorithm for a problem is very often not the best candidate for parallelisation. You might be better off using a different algorithm, one which scales better on multiple processors. Which leads neatly to the next point. I think also that you may have to come to terms with surrendering any attachments you have (if you have them) to lots of clever algorithms and data structures which work well when all your data is resident in memory. Very often, trying to adapt them to the situation where you can't get the data into memory all at once is much harder (and less performant) than brute-force and regarding the entire file as one large array. Performance starts to matter in a serious way, both the execution performance of programs, and developer performance. It's not that a 1TB dataset requires 10 times as much code as a 1GB dataset so you have to work faster; it's that some of the ideas that you will need to implement will be crazily complex, and probably have to be written by domain specialists, i.e. the scientists you are working with. Here the domain specialists write in Matlab. But this is going on too long; I'd better get back to work.
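As a small illustration of the "treat the file as one large array and stream it" point, here is a sketch using h5py; the file and dataset names are hypothetical:

```python
import h5py

# Stream a dataset that may be far larger than RAM, one slab at a time.
with h5py.File('survey.h5', 'r') as f:          # hypothetical file
    dset = f['amplitudes']                      # hypothetical dataset name
    chunk = 1_000_000
    total = 0.0
    count = 0
    for start in range(0, dset.shape[0], chunk):
        block = dset[start:start + chunk]       # only this slab is read into memory
        total += block.sum()
        count += block.size
    print('mean =', total / count)
```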
I just took my first baby step into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny. I write Python, so I've spent the last few hours reading about HDF5, and Numpy, and PyTables, but I still feel like I'm not really grokking what a terabyte-sized data set actually means for me as a programmer. For example, someone pointed out that with larger data sets, it becomes impossible to read the whole thing into memory, not because the machine has insufficient RAM, but because the architecture has insufficient address space! It blew my mind. What other assumptions have I been relying on in the classroom that just don't work with input this big? What kinds of things do I need to start doing or thinking about differently? (This doesn't have to be Python-specific.)
0
1
1,855
0
27,096,538
0
1
1
0
2
false
3
2010-06-10T15:58:00.000
0
5
0
Save Workspace - save all variables to a file. Python doesn't have it
3,016,116
0
python,serialization
I take issue with the statement that the saving of variables in Matlab is an environment function. The "save" statement in Matlab is a function and part of the Matlab language, not just a command. It is a very useful function, as you don't have to worry about the trivial minutiae of file I/O, and it handles all sorts of variables, from scalars and matrices to objects and structures.
I cannot understand it. Very simple and obvious functionality: you have code in some programming language and you run it. In this code you generate variables, then you save them (the values, names, namely everything) to a file, with one command. Once it's saved, you can open such a file in your code, also with a simple command. It works perfectly in Matlab (save Workspace, load Workspace) - in Python there's some weird "pickle" protocol, which produces errors all the time, while all I want to do is save a variable and load it again in another session (?????). E.g. you cannot save a class with variables (in Matlab there's no problem), and you cannot load arrays in cPickle (but you can save them (?????)). Why not make it easier? Is there a way to save the current variables with values, and then load them?
0
1
8,265
0
3,016,188
0
1
1
0
2
false
3
2010-06-10T15:58:00.000
2
5
0
Save Workspace - save all variables to a file. Python doesn't have it
3,016,116
0.07983
python,serialization
What you are describing is a Matlab environment feature, not a programming language feature. What you need is a way to store the serialized state of some object, which can easily be done in almost any programming language. In the Python world, pickle is the easiest way to achieve it, and if you could provide more details about the errors it produces for you, people would probably be able to give you more details on that. In general, for object-oriented languages (including Python), it is always a good approach to encapsulate your state into a single object that can be serialized and de-serialized, and then store/load an instance of that class. Pickling and unpickling of such objects works perfectly for many developers, so this must be something specific to your implementation.
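A minimal sketch of the "encapsulate your state in one object, then pickle that object" pattern; the class and attribute names are only illustrative:

```python
import pickle

class Session(object):
    """Bundle everything you want to keep between runs into one object."""

    def __init__(self):
        self.parameters = {}
        self.results = []

    def save(self, path):
        with open(path, 'wb') as f:
            pickle.dump(self, f, protocol=pickle.HIGHEST_PROTOCOL)

    @staticmethod
    def load(path):
        with open(path, 'rb') as f:
            return pickle.load(f)

state = Session()
state.parameters['threshold'] = 0.5
state.save('session.pkl')

restored = Session.load('session.pkl')
print(restored.parameters)
```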
I cannot understand it. Very simple and obvious functionality: you have code in some programming language and you run it. In this code you generate variables, then you save them (the values, names, namely everything) to a file, with one command. Once it's saved, you can open such a file in your code, also with a simple command. It works perfectly in Matlab (save Workspace, load Workspace) - in Python there's some weird "pickle" protocol, which produces errors all the time, while all I want to do is save a variable and load it again in another session (?????). E.g. you cannot save a class with variables (in Matlab there's no problem), and you cannot load arrays in cPickle (but you can save them (?????)). Why not make it easier? Is there a way to save the current variables with values, and then load them?
0
1
8,265
0
5,926,995
0
0
0
0
1
false
1
2010-06-14T04:59:00.000
0
2
0
Can't import matplotlib
3,035,028
0
python,installation,numpy,matplotlib
Following Justin's comment ... here is the equivalent file for Linux: /usr/lib/pymodules/python2.6/matplotlib/__init__.py sudo edit that to fix the troublesome line to: if not ((int(nn[0]) >= 1 and int(nn[1]) >= 1) or int(nn[0]) >= 2): Thanks Justin Peel!
I installed matplotlib using the Mac disk image installer for MacOS 10.5 and Python 2.5. I installed numpy then tried to import matplotlib but got this error: ImportError: numpy 1.1 or later is required; you have 2.0.0.dev8462. It seems to me that version 2.0.0.dev8462 would be later than version 1.1, but I am guessing that matplotlib got confused by the ".dev8462" in the version. Is there any workaround for this?
0
1
1,283
0
3,098,439
0
1
0
0
1
false
2
2010-06-23T01:31:00.000
4
5
0
Method for guessing type of data currently represented as strings
3,098,337
0.158649
python,parsing,csv,input,types
ast.literal_eval() can get the easy ones.
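A rough sketch of combining ast.literal_eval() with a few explicit date patterns; the list of date formats is only an illustrative starting point:

```python
import ast
import datetime

DATE_FORMATS = ('%Y-%m-%d', '%d/%m/%Y', '%m/%d/%Y')   # extend with the idioms you expect

def guess_type(value):
    """Guess the type of a single CSV cell given as a string."""
    try:
        parsed = ast.literal_eval(value)
        if isinstance(parsed, bool):
            return 'boolean'
        if isinstance(parsed, int):
            return 'integer'
        if isinstance(parsed, float):
            return 'float'
    except (ValueError, SyntaxError):
        pass
    if value.lower() in ('true', 'false'):
        return 'boolean'
    for fmt in DATE_FORMATS:
        try:
            datetime.datetime.strptime(value, fmt)
            return 'date'
        except ValueError:
            pass
    return 'string'

print(guess_type('42'), guess_type('3.14'), guess_type('2010-06-23'), guess_type('hello'))
```

If adding a dependency is acceptable, the third-party dateutil package can parse a much wider range of date idioms than a fixed format list.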
I'm currently parsing CSV tables and need to discover the "data types" of the columns. I don't know the exact format of the values. Obviously, everything that the CSV parser outputs is a string. The data types I am currently interested in are: integer floating point date boolean string My current thoughts are to test a sample of rows (maybe several hundred?) in order to determine the types of data present through pattern matching. I am particularly concerned about the date data type - is there a Python module for parsing common date idioms (obviously I will not be able to detect them all)? What about integers and floats?
0
1
4,235
0
3,106,854
0
0
0
0
1
true
1
2010-06-24T02:02:00.000
1
2
0
Finding images with pure colours
3,106,788
1.2
python,image,image-processing,colors
How about doing this? Blur the image using some fast blurring algorithm. (Search for stack blur or box blur) Compute standard deviation of the pixels in RGB domain, once for each color. Discard the image if the standard deviation is beyond a certain threshold.
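A rough sketch of that recipe using Pillow (4.0 or later, for BoxBlur) and NumPy rather than the App Engine Images API; the blur radius and threshold are arbitrary values you would need to tune:

```python
import numpy as np
from PIL import Image, ImageFilter

def is_pure_colour(path, threshold=30.0):
    """True if the image is dominated by flat, uniform colour."""
    img = Image.open(path).convert('RGB')
    blurred = img.filter(ImageFilter.BoxBlur(4))        # smooth out noise and fine texture
    pixels = np.asarray(blurred, dtype=float).reshape(-1, 3)
    stds = pixels.std(axis=0)                           # per-channel standard deviation
    return bool((stds < threshold).all())

print(is_pure_colour('sky.jpg'))                        # placeholder file name
```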
I've read a number of questions on finding the colour palette of an image, but my problem is slightly different. I'm looking for images made up of pure colours: pictures of the open sky, colourful photo backgrounds, red brick walls etc. So far I've used the App Engine Image.histogram() function to produce a histogram, filter out values below a certain occurrence threshold, and average the remaining ones down. That still seems to leave in a lot of extraneous photographs where there are blobs of pure colour in a mixed bag of other photos. Any ideas much appreciated!
1
1
381
0
3,134,369
0
0
0
0
1
false
1
2010-06-28T16:43:00.000
1
3
1
Switch python distributions
3,134,332
0.066568
python,osx-snow-leopard,numpy,macports
You need to update your PATH so that the stuff from MacPorts is in front of the standard system directories, e.g., export PATH=/opt/local/bin:/opt/local/sbin:/opt/local/Library/Frameworks/Python.framework/Versions/Current/bin/:$PATH. UPDATE: Pay special attention to the fact that /opt/local/Library/Frameworks/Python.framework/Versions/Current/bin is in front of your old PATH value.
I have a MacBook Pro with Snow Leopard, and the Python 2.6 distribution that comes standard. Numpy does not work properly on it. loadtxt gives errors about the filename being too long, and genfromtxt does not work at all (no such object in module error). So then I tried downloading the py26-numpy port on MacPorts. Of course, when I use python, it defaults to the Mac distribution. How can I switch it to use the latest and greatest from MacPorts? This seems so much simpler than building all the tools I need from source... Thanks!
0
1
645
0
3,166,594
0
0
0
0
2
false
8
2010-07-01T23:48:00.000
1
4
0
Unstructured Text to Structured Data
3,162,450
0.049958
python,nlp,structured-data
Possibly look at "Collective Intelligence" by Toby Segaran. I seem to remember that addressing the basics of this in one chapter.
I am looking for references (tutorials, books, academic literature) concerning structuring unstructured text in a manner similar to the Google Calendar quick add button. I understand this may come under the NLP category, but I am interested only in the process of going from something like "Levi jeans size 32 A0b293" to: Brand: Levi, Size: 32, Category: Jeans, Code: A0b293. I imagine it would be some combination of lexical parsing and machine learning techniques. I am rather language-agnostic, but if pushed would prefer Python, Matlab or C++ references. Thanks
0
1
7,405
0
3,177,235
0
0
0
0
2
false
8
2010-07-01T23:48:00.000
0
4
0
Unstructured Text to Structured Data
3,162,450
0
python,nlp,structured-data
If you are only working with cases like the example you cited, you are better off using some manual rule-based approach that is 100% predictable and covers 90% of the cases it might encounter in production. You could enumerate lists of all possible brands and categories and detect which is which in an input string, because there's usually very little intersection between these two lists. The other two fields could easily be detected and extracted using regular expressions (1-3 digit numbers are always sizes, etc.). Your problem domain doesn't seem big enough to warrant a more heavy-duty approach such as statistical learning.
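A rough sketch of that rule-based approach; the brand and category lists and the code pattern are only illustrative and would come from your own catalogue:

```python
import re

BRANDS = {'levi', 'wrangler', 'diesel'}        # illustrative; take these from your catalogue
CATEGORIES = {'jeans', 'shirt', 'jacket'}

def parse_listing(text):
    record = {}
    for token in text.split():
        low = token.lower()
        if low in BRANDS:
            record['Brand'] = token
        elif low in CATEGORIES:
            record['Category'] = token
    size = re.search(r'\b\d{1,3}\b', text)                    # 1-3 digit numbers are sizes
    if size:
        record['Size'] = int(size.group(0))
    code = re.search(r'\b(?=\w*\d)[A-Za-z0-9]{6,}\b', text)   # long alphanumeric token with a digit
    if code:
        record['Code'] = code.group(0)
    return record

print(parse_listing('Levi jeans size 32 A0b293'))
# {'Brand': 'Levi', 'Category': 'jeans', 'Size': 32, 'Code': 'A0b293'}
```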
I am looking for references (tutorials, books, academic literature) concerning structuring unstructured text in a manner similar to the Google Calendar quick add button. I understand this may come under the NLP category, but I am interested only in the process of going from something like "Levi jeans size 32 A0b293" to: Brand: Levi, Size: 32, Category: Jeans, Code: A0b293. I imagine it would be some combination of lexical parsing and machine learning techniques. I am rather language-agnostic, but if pushed would prefer Python, Matlab or C++ references. Thanks
0
1
7,405