GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 4,567,653 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2010-12-29T19:12:00.000 | 0 | 1 | 1 | Easiest non-Java way to write HBase MapReduce on CDH3? | 4,557,045 | 1.2 | python,hadoop,mapreduce,hbase | It's not precisely an answer, but it's the closest I got --
I asked in #hbase on irc.freenode.net yesterday, and one of the Cloudera employees responded.
The "Input Splits" problem I'm having with Pig is specific to Pig 0.7, and Pig 0.8 will be bundled with Cloudera CDH3 Beta 4 (no ETA on that). Therefore, what I want to do (easily write M/R jobs using HBase tables as both sink and source) will be possible in their next release. It also seems that the HBaseStorage class will be generally improved to help with read/write operations from ANY JVM language, making Jython, JRuby, Scala and Clojure all much more feasible as well.
So the answer to the question, at this time, is "Wait for CDH3 Beta 4", or if you're impatient, "Download the latest version of Pig and pray that it's compatible with your HBase". | I've been working on this for a long time, and I feel very worn out; I'm hoping for an [obvious?] insight from the SO community that might get my pet project back on the move, so I can stop kicking myself. I'm using Cloudera CDH3, HBase 0.89 and Hadoop 0.20.
I have a Python/Django app that writes data to a single HBase table using the Thrift interface, and that works great. Now I want to Map/Reduce it into some more HBase tables.
The obvious answer here is either Dumbo or Apache PIG, but with Pig, the HBaseStorage adapter support isn't available for my version yet (Pig is able to load the classes and definitions, but freezes at the "Map" step, complaining about "Input Splits"; Pig mailing lists suggest this is fixed in Pig 0.8, which is incompatible with CDH3 Hadoop, so I'd have to use bleeding-edge versions of everything [I think]). I can't find any information on how to make Dumbo use HBaseStorage as a data sink.
I don't care if it's Python, Ruby, Scala, Clojure, Jython, JRuby or even PHP, I just really don't want to write Java (for lots of reasons, most of them involving the sinking feeling I get every time I have to convert an Int() to IntWritable() etc).
I've tried literally every last solution and example I can find (for the last 4 weeks) for writing HBase Map/Reduce jobs in alternative languages, but everything seems to be either outdated or incomplete. Please, Stack Overflow, save me from my own devices! | 1 | 1 | 1,322 |
0 | 4,561,924 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2010-12-30T06:36:00.000 | 4 | 2 | 0 | Bell Curve Gaussian Algorithm (Python and/or C#) | 4,560,554 | 0.379949 | c#,python,algorithm | The important property of the bell curve is that it describes the normal distribution, which is a simple model for many natural phenomena. I am not sure what kind of "normalization" you intend to do, but it seems to me that the current scores already follow a normal distribution; you just need to determine its properties (mean and variance) and scale each result accordingly. | Here's a somewhat simplified example of what I am trying to do.
Suppose I have a formula that computes credit points, but the formula has no constraints (for example, the score might be 1 to 5000). And a score is assigned to 100 people.
Now, I want to assign a "normalized" score between 200 and 800 to each person, based on a bell curve. So for example, if one guy has 5000 points, he might get an 800 on the new scale. The people in the middle of my point range will get a score near 500; in other words, 500 would be the median.
A similar example might be the old scenario of "grading on the curve", where the bulk of the students perhaps get a C or C+.
I'm not asking for the code, just a library, an algorithm book or a website to refer to.... I'll probably be writing this in Python (but C# is of some interest as well). There is NO need to graph the bell curve. My data will probably be in a database and I may have as many as a million people to assign this score to, so scalability is an issue.
Thanks. | 0 | 1 | 4,989 |
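The mapping the asker describes — fit a normal distribution to the raw scores, then rescale into 200–800 with the mean landing at 500 — can be sketched in plain Python. The 3-sigma spread and the clamping at the bounds are illustrative choices, not part of the question:

```python
import statistics

def normalize_scores(raw, lo=200, hi=800, sigmas=3.0):
    """Map raw scores onto [lo, hi] so the mean lands at the midpoint.

    Scores beyond `sigmas` standard deviations are clamped to the bounds;
    the 3-sigma spread is an arbitrary illustrative choice.
    """
    mean = statistics.mean(raw)
    stdev = statistics.pstdev(raw) or 1.0  # avoid division by zero
    mid = (lo + hi) / 2
    half = (hi - lo) / 2
    out = []
    for x in raw:
        z = (x - mean) / stdev
        scaled = mid + (z / sigmas) * half
        out.append(max(lo, min(hi, scaled)))
    return out

scores = normalize_scores([100, 500, 900, 1300, 5000])
```

Since this is just one pass of arithmetic per person, it scales linearly to the asker's million rows; the mean and variance can also be computed in the database itself.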
0 | 19,825,314 | 0 | 0 | 0 | 0 | 1 | false | 152 | 2011-01-07T11:22:00.000 | 27 | 12 | 0 | Finding local maxima/minima with Numpy in a 1D numpy array | 4,624,970 | 1 | python,numpy | Another approach (more words, less code) that may help:
The locations of local maxima and minima are also the locations of the zero crossings of the first derivative. It is generally much easier to find zero crossings than it is to directly find local maxima and minima.
Unfortunately, the first derivative tends to "amplify" noise, so when significant noise is present in the original data, the first derivative is best used only after the original data has had some degree of smoothing applied.
Since smoothing is, in the simplest sense, a low pass filter, the smoothing is often best (well, most easily) done by using a convolution kernel, and "shaping" that kernel can provide a surprising amount of feature-preserving/enhancing capability. The process of finding an optimal kernel can be automated using a variety of means, but the best may be simple brute force (plenty fast for finding small kernels). A good kernel will (as intended) massively distort the original data, but it will NOT affect the location of the peaks/valleys of interest.
Fortunately, quite often a suitable kernel can be created via a simple SWAG ("educated guess"). The width of the smoothing kernel should be a little wider than the widest expected "interesting" peak in the original data, and its shape will resemble that peak (a single-scaled wavelet). For mean-preserving kernels (what any good smoothing filter should be) the sum of the kernel elements should be precisely equal to 1.00, and the kernel should be symmetric about its center (meaning it will have an odd number of elements).
Given an optimal smoothing kernel (or a small number of kernels optimized for different data content), the degree of smoothing becomes a scaling factor for (the "gain" of) the convolution kernel.
Determining the "correct" (optimal) degree of smoothing (convolution kernel gain) can even be automated: Compare the standard deviation of the first derivative data with the standard deviation of the smoothed data. How the ratio of the two standard deviations changes with changes in the degree of smoothing can be used to predict effective smoothing values. A few manual data runs (that are truly representative) should be all that's needed.
All the prior solutions posted above compute the first derivative, but they don't treat it as a statistical measure, nor do they attempt to perform feature-preserving/enhancing smoothing (to help subtle peaks "leap above" the noise).
Finally, the bad news: Finding "real" peaks becomes a royal pain when the noise also has features that look like real peaks (overlapping bandwidth). The next more-complex solution is generally to use a longer convolution kernel (a "wider kernel aperture") that takes into account the relationship between adjacent "real" peaks (such as minimum or maximum rates for peak occurrence), or to use multiple convolution passes using kernels having different widths (but only if it is faster: it is a fundamental mathematical truth that linear convolutions performed in sequence can always be convolved together into a single convolution). But it is often far easier to first find a sequence of useful kernels (of varying widths) and convolve them together than it is to directly find the final kernel in a single step.
Hopefully this provides enough info to let Google (and perhaps a good stats text) fill in the gaps. I really wish I had the time to provide a worked example, or a link to one. If anyone comes across one online, please post it here! | Can you suggest a module function from numpy/scipy that can find local maxima/minima in a 1D numpy array? Obviously the simplest approach ever is to have a look at the nearest neighbours, but I would like to have an accepted solution that is part of the numpy distro. | 0 | 1 | 290,777 |
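The zero-crossing approach this answer describes can be sketched with NumPy: smooth with a small mean-preserving kernel, take the first difference, and report its sign changes. The boxcar kernel here is the simplest possible choice, not an optimized one, and `kernel_width=1` disables smoothing entirely for the clean demo data:

```python
import numpy as np

def local_extrema(y, kernel_width=3):
    """Return indices of local maxima and minima of a 1-D array.

    Smooths with a mean-preserving boxcar kernel, then looks for sign
    changes in the first difference (zero crossings of the derivative).
    """
    kernel = np.ones(kernel_width) / kernel_width  # elements sum to 1.0
    smooth = np.convolve(y, kernel, mode="same")
    d = np.diff(smooth)
    sign = np.sign(d)
    # +1 -> -1 transition is a maximum; -1 -> +1 is a minimum
    maxima = np.where((sign[:-1] > 0) & (sign[1:] < 0))[0] + 1
    minima = np.where((sign[:-1] < 0) & (sign[1:] > 0))[0] + 1
    return maxima, minima

y = np.array([0.0, 1.0, 2.0, 1.0, 0.0, 1.0, 2.0, 3.0, 2.0])
maxima, minima = local_extrema(y, kernel_width=1)
```

For noisy data, widen the kernel (and possibly shape it to resemble the expected peak, as the answer suggests) before trusting the sign changes.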
0 | 4,630,743 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2011-01-07T22:07:00.000 | 4 | 5 | 0 | Using Python for quasi randomization | 4,630,723 | 0.158649 | python,random | To guarantee that there will be the same number of zeros and ones you can generate a list containing n/2 zeros and n/2 ones and shuffle it with random.shuffle.
For small n, if you aren't happy that the result passes your acceptance criteria (e.g. not too many consecutive equal numbers), shuffle again. Be aware that doing this reduces the randomness of the result, not increases it.
For larger n it will take too long to find a result that passes your criteria using this method (because most results will fail). Instead you could generate elements one at a time with these rules:
If you already generated 4 ones in a row the next number must be zero and vice versa.
Otherwise, if you need to generate x more ones and y more zeros, the chance of the next number being one is x/(x+y). | Here's the problem: I try to randomize n times a choice between two elements (let's say [0,1] -> 0 or 1), and my final list will have n/2 [0] + n/2 [1]. I tend to have this kind of result: [0 1 0 0 0 1 0 1 1 1 1 1 1 0 0, until n]: the problem is that I don't want to have serially 4 or 5 times the same number so often. I know that I could use a quasi randomisation procedure, but I don't know how to do so (I'm using Python). | 0 | 1 | 588 |
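Both suggestions from this answer can be sketched in one generator: balance by construction (n/2 of each), the incremental x/(x+y) rule, and a forced switch after a run of 4. The run cap and the exhausted-side fallback are my assumptions about tail handling:

```python
import random

def balanced_sequence(n, max_run=4, rng=random):
    """Generate n//2 zeros and the rest ones, avoiding runs longer than
    max_run where possible, using the x/(x+y) rule from the answer."""
    remaining = {0: n // 2, 1: n - n // 2}
    out = []
    run = 0
    for _ in range(n):
        last = out[-1] if out else None
        if last is not None and run >= max_run and remaining[1 - last] > 0:
            choice = 1 - last  # forced switch after a long run
        else:
            x, y = remaining[1], remaining[0]  # ones left, zeros left
            choice = 1 if rng.random() < x / (x + y) else 0
            if remaining[choice] == 0:  # that side is exhausted
                choice = 1 - choice
        out.append(choice)
        run = run + 1 if choice == last else 1
        remaining[choice] -= 1
    return out

seq = balanced_sequence(20, rng=random.Random(42))
```

Note that near the end of the sequence a long run can still be unavoidable (e.g. when only ones remain), which is exactly the reduced-randomness trade-off the answer warns about.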
0 | 4,630,745 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2011-01-07T22:07:00.000 | 1 | 5 | 0 | Using Python for quasi randomization | 4,630,723 | 0.039979 | python,random | Having 6 1's in a row isn't particularly improbable -- are you sure you're not getting what you want?
There's a simple Python interface for a uniformly distributed random number, is that what you're looking for? | Here's the problem: I try to randomize n times a choice between two elements (let's say [0,1] -> 0 or 1), and my final list will have n/2 [0] + n/2 [1]. I tend to have this kind of result: [0 1 0 0 0 1 0 1 1 1 1 1 1 0 0, until n]: the problem is that I don't want to have serially 4 or 5 times the same number so often. I know that I could use a quasi randomisation procedure, but I don't know how to do so (I'm using Python). | 0 | 1 | 588 |
1 | 4,637,207 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2011-01-09T01:31:00.000 | 0 | 3 | 0 | Python Fast monochromatic bitmap | 4,637,190 | 0 | python,bitmap,performance | One thing I might suggest is using Python's built-in array class (http://docs.python.org/library/array.html), with a type of 'B'. Coding will be simplest if you use one byte per pixel, but if you want to save memory, you can pack 8 to a byte, and access using your own bit manipulation. | I want to implement a 1024x1024 monochromatic grid. I need to read data from any cell and insert rectangles with various dimensions. I have tried a list of lists (used like a 2D array), and found that a list of booleans is slower than a list of integers.... I have tried a 1D list, and it was slower than the 2D one; numpy is about 10 times slower than a standard Python list. The fastest way I have found is PIL with a monochromatic bitmap and its "load" method, but I want it to run a lot faster, so I tried to compile it with Shed Skin, but unfortunately there is no PIL support there. Do you know any way of implementing such a grid faster without rewriting it in C or C++? | 0 | 1 | 651 |
1 | 4,637,284 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2011-01-09T01:31:00.000 | 2 | 3 | 0 | Python Fast monochromatic bitmap | 4,637,190 | 0.132549 | python,bitmap,performance | Raph's suggestion of using array is good, but it won't help on CPython; in fact I'd expect it to be 10-15% slower. However, if you use it on PyPy (http://pypy.org/) I'd expect excellent results. | I want to implement a 1024x1024 monochromatic grid. I need to read data from any cell and insert rectangles with various dimensions. I have tried a list of lists (used like a 2D array), and found that a list of booleans is slower than a list of integers.... I have tried a 1D list, and it was slower than the 2D one; numpy is about 10 times slower than a standard Python list. The fastest way I have found is PIL with a monochromatic bitmap and its "load" method, but I want it to run a lot faster, so I tried to compile it with Shed Skin, but unfortunately there is no PIL support there. Do you know any way of implementing such a grid faster without rewriting it in C or C++? | 0 | 1 | 651 |
0 | 4,880,177 | 0 | 0 | 0 | 0 | 1 | false | 16 | 2011-01-14T11:32:00.000 | 4 | 4 | 0 | Is there a matplotlib flowable for ReportLab? | 4,690,585 | 0.197375 | python,matplotlib,reportlab | There is not one, but what I do in my own use of MatPlotLib with ReportLab is to generate PNGs and then embed the PNGs so that I don't need to also use PIL. However, if you do use PIL, I believe you should be able to generate and embed EPS using MatPlotLib and ReportLab. | I want to embed matplotlib charts into PDFs generated by ReportLab directly - i.e. not saving as a PNG first and then embedding the PNG into the PDF (I think I'll get better quality output).
Does anyone know if there's a matplotlib flowable for ReportLab?
Thanks | 0 | 1 | 10,096 |
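The PNG route this answer takes can at least skip the temporary file: render the figure into an in-memory buffer with matplotlib. Passing that buffer on to ReportLab's `Image` flowable is my assumption here (noted in the final comment), so check it against your ReportLab version:

```python
import io
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

def figure_to_png_buffer(dpi=150):
    """Render a small chart to an in-memory PNG buffer."""
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3], [4, 1, 9])
    buf = io.BytesIO()
    fig.savefig(buf, format="png", dpi=dpi)  # higher dpi -> better quality
    plt.close(fig)
    buf.seek(0)
    return buf

buf = figure_to_png_buffer()
# Assumption: a file-like object can be handed to ReportLab, e.g.
# reportlab.platypus.Image(buf, width=400, height=300)
```

Raising the `dpi` when saving is the main lever for the quality concern the asker raises about the PNG route.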
0 | 4,695,834 | 0 | 0 | 0 | 0 | 1 | true | 31 | 2011-01-14T19:58:00.000 | 7 | 3 | 0 | expanding (adding a row or column) a scipy.sparse matrix | 4,695,337 | 1.2 | python,scipy,sparse-matrix | I don't think that there is any way to really escape from doing the copying. Both of those types of sparse matrices store their data as Numpy arrays (in the data and indices attributes for csr and in the data and rows attributes for lil) internally and Numpy arrays can't be extended.
Update with more information:
LIL does stand for LInked List, but the current implementation doesn't quite live up to the name. The Numpy arrays used for data and rows are both of type object. Each of the objects in these arrays are actually Python lists (an empty list when all values are zero in a row). Python lists aren't exactly linked lists, but they are kind of close and quite frankly a better choice due to O(1) look-up. Personally, I don't immediately see the point of using a Numpy array of objects here rather than just a Python list. You could fairly easily change the current lil implementation to use Python lists instead which would allow you to add a row without copying the whole matrix. | Suppose I have a NxN matrix M (lil_matrix or csr_matrix) from scipy.sparse, and I want to make it (N+1)xN where M_modified[i,j] = M[i,j] for 0 <= i < N (and all j) and M[N,j] = 0 for all j. Basically, I want to add a row of zeros to the bottom of M and preserve the remainder of the matrix. Is there a way to do this without copying the data? | 0 | 1 | 27,075 |
0 | 4,710,783 | 0 | 1 | 0 | 0 | 1 | false | 1,378 | 2011-01-15T16:10:00.000 | 2 | 17 | 0 | How to put the legend outside the plot in Matplotlib | 4,700,614 | 0.023525 | python,matplotlib,legend | You can also try figlegend. It is possible to create a legend independent of any Axes object. However, you may need to create some "dummy" Paths to make sure the formatting for the objects gets passed on correctly. | I have a series of 20 plots (not subplots) to be made in a single figure. I want the legend to be outside of the box. At the same time, I do not want to change the axes, as the size of the figure gets reduced. Kindly help me with the following queries:
I want to keep the legend box outside the plot area. (I want the legend to be outside at the right side of the plot area).
Is there any way to reduce the font size of the text inside the legend box, so that the legend box stays small? | 0 | 1 | 1,360,273 |
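Both of the asker's requests — a legend outside the axes on the right, and smaller legend text — can be sketched with `bbox_to_anchor` and the `fontsize` keyword; the exact anchor values and margin are illustrative:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the demo
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for i in range(3):  # a few stand-ins for the asker's 20 plots
    ax.plot([0, 1], [i, i + 1], label="series %d" % i)

# Anchor the legend just outside the right edge of the axes,
# with a smaller font so the box stays compact.
legend = ax.legend(loc="center left", bbox_to_anchor=(1.02, 0.5),
                   fontsize="small")
fig.subplots_adjust(right=0.75)  # leave room for the legend
```

`bbox_to_anchor` coordinates are in axes fractions, so (1.02, 0.5) means "slightly past the right edge, vertically centered" regardless of the data limits.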
0 | 4,758,849 | 0 | 0 | 0 | 0 | 3 | false | 8 | 2011-01-21T12:17:00.000 | 2 | 6 | 0 | I want to use NumPy/SciPy. Should I use Python 2 or 3? | 4,758,693 | 0.066568 | python,numpy,python-3.x | I am quite conservative in this respect, and so I use Python 2.6. That's what comes pre-installed on my Linux box, and it is also the target version for the latest binary releases of SciPy.
Python 3 is without a doubt a huge step forward, but if you do mainly numerical stuff with NumPy and SciPy, I'd still go for Python 2. | I am about to embark on some signal processing work using NumPy/SciPy. However, I have never used Python before and don't know where to start.
I see there are currently two branches of Python in this world: Version 2.x and 3.x.
Being a neophile, I instinctively tend to go for the newer one, but there seems to be a lot of talk about incompatibilities between the two. Numpy seems to be compatible with Python 3. I can't find any documents on SciPy.
Would you recommend to go with Python 3 or 2?
(could you point me to some resources to get started? I know C/C++, Ruby, Matlab and some other stuff and basically want to use NumPy instead of Matlab.) | 0 | 1 | 5,690 |
0 | 4,758,906 | 0 | 0 | 0 | 0 | 3 | false | 8 | 2011-01-21T12:17:00.000 | 2 | 6 | 0 | I want to use NumPy/SciPy. Should I use Python 2 or 3? | 4,758,693 | 0.066568 | python,numpy,python-3.x | I can recommend using py3k over py2.6 if possible, especially if you're a new user, since some of the syntax changes in py3k and it'll be harder to get used to the new syntax if you start out learning the old.
The modules you mention all have support for py3k but as SilentGhost noted you might want to check for compatibility with plotting libraries too. | I am about to embark on some signal processing work using NumPy/SciPy. However, I have never used Python before and don't know where to start.
I see there are currently two branches of Python in this world: Version 2.x and 3.x.
Being a neophile, I instinctively tend to go for the newer one, but there seems to be a lot of talk about incompatibilities between the two. Numpy seems to be compatible with Python 3. I can't find any documents on SciPy.
Would you recommend to go with Python 3 or 2?
(could you point me to some resources to get started? I know C/C++, Ruby, Matlab and some other stuff and basically want to use NumPy instead of Matlab.) | 0 | 1 | 5,690 |
0 | 4,758,785 | 0 | 0 | 0 | 0 | 3 | true | 8 | 2011-01-21T12:17:00.000 | 3 | 6 | 0 | I want to use NumPy/SciPy. Should I use Python 2 or 3? | 4,758,693 | 1.2 | python,numpy,python-3.x | Both scipy and numpy are compatible with py3k. However, if you'll need to plot stuff: matplotlib is not yet officially compatible with py3k. So, it'll depend on whether your signal processing involves plotting.
Syntactic differences are not that great between the two version. | I am about to embark on some signal processing work using NumPy/SciPy. However, I have never used Python before and don't know where to start.
I see there are currently two branches of Python in this world: Version 2.x and 3.x.
Being a neophile, I instinctively tend to go for the newer one, but there seems to be a lot of talk about incompatibilities between the two. Numpy seems to be compatible with Python 3. I can't find any documents on SciPy.
Would you recommend to go with Python 3 or 2?
(could you point me to some resources to get started? I know C/C++, Ruby, Matlab and some other stuff and basically want to use NumPy instead of Matlab.) | 0 | 1 | 5,690 |
0 | 4,759,549 | 0 | 0 | 1 | 0 | 3 | false | 21 | 2011-01-21T13:47:00.000 | 0 | 8 | 0 | Java or Python for math? | 4,759,485 | 0 | java,python,math,stocks | What is more important for you?
If it's rapid application development, I found Python significantly easier to code for than Java - and I was just learning Python, while I had been coding on Java for years.
If it's application speed and the ability to reuse existing code, then you should probably stick with Java. It's reasonably fast and many research efforts at the moment use Java as their language of choice. | I'm trying to write a pretty heavy-duty math-based project, which will parse through 100MB+ of data several times a day, so I need a fast language that's pretty easy to use. I would have gone with C, but getting a large project done in C is very difficult, especially with the low-level programming getting in your way. So, I was torn between python and java. Both are well equipped with OO features, so I don't mind that. Now, here are my pros for choosing python:
Very easy to use language
Has a pretty large library of useful stuff
Has an easy to use plotting library
Here are the cons:
Not exactly blazing
There isn't a native python neural network library that is active
I can't close source my code without going through quite a bit of trouble
Deploying python code on clients' computers is hard to deal with, especially when clients are idiots.
Here are the pros for choosing Java:
Huge library
Well supported
Easy to deploy
Pretty fast, possibly even comparable to C++
The Encog Neural Network Library is really active and pretty awesome
Networking support is really good
Strong typing
Here are the cons for Java:
I can't find a good graphing library like matplotlib for python
No built in support for big integers, that means another dependency (I mean REALLY big integers, not just math.BigInteger size)
File IO is kind of awkward compared to Python
Not a ton of array manipulating or "make programming easy" type of features that python has.
So, I was hoping you guys can tell me what to use. I'm equally familiar with both languages. Also, suggestions for other languages is great too.
EDIT: WOW! you guys are fast! 30 mins at 10 responses! | 0 | 1 | 12,225 |
0 | 4,759,596 | 0 | 0 | 1 | 0 | 3 | false | 21 | 2011-01-21T13:47:00.000 | 1 | 8 | 0 | Java or Python for math? | 4,759,485 | 0.024995 | java,python,math,stocks | If those are the choices, then Java should be the faster for math intensive work. It is compiled (although yes it is still running byte code).
Exelian mentions NumPy. There's also the SciPy package. Both are worth looking at but only really seem to give speed improvements for work with lots of arrays and vector processing.
When I tried using these with NLTK for a math-intensive routine, I found there wasn't that much of a speedup.
For math intensive work these days, I'd be using C/C++ or C# (personally I prefer C# over Java although that shouldn't affect your decision). My first employer out of univ. paid me to use Fortran for stuff that is almost certainly more math intensive than anything you're thinking of. Don't laugh - the Fortran compilers are some of the best for math processing on heavy iron. | I'm trying to write a pretty heavy-duty math-based project, which will parse through 100MB+ of data several times a day, so I need a fast language that's pretty easy to use. I would have gone with C, but getting a large project done in C is very difficult, especially with the low-level programming getting in your way. So, I was torn between python and java. Both are well equipped with OO features, so I don't mind that. Now, here are my pros for choosing python:
Very easy to use language
Has a pretty large library of useful stuff
Has an easy to use plotting library
Here are the cons:
Not exactly blazing
There isn't a native python neural network library that is active
I can't close source my code without going through quite a bit of trouble
Deploying python code on clients' computers is hard to deal with, especially when clients are idiots.
Here are the pros for choosing Java:
Huge library
Well supported
Easy to deploy
Pretty fast, possibly even comparable to C++
The Encog Neural Network Library is really active and pretty awesome
Networking support is really good
Strong typing
Here are the cons for Java:
I can't find a good graphing library like matplotlib for python
No built in support for big integers, that means another dependency (I mean REALLY big integers, not just math.BigInteger size)
File IO is kind of awkward compared to Python
Not a ton of array manipulating or "make programming easy" type of features that python has.
So, I was hoping you guys can tell me what to use. I'm equally familiar with both languages. Also, suggestions for other languages is great too.
EDIT: WOW! you guys are fast! 30 mins at 10 responses! | 0 | 1 | 12,225 |
0 | 4,765,700 | 0 | 0 | 1 | 0 | 3 | false | 21 | 2011-01-21T13:47:00.000 | 0 | 8 | 0 | Java or Python for math? | 4,759,485 | 0 | java,python,math,stocks | Apache Commons Math picked up where JAMA left off. It is quite capable for scientific computing.
So is Python - NumPy and SciPy are excellent. I also like the fact that Python is a hybrid of object-orientation and functional programming. Functional programming is awfully handy for numerical methods.
I'd recommend using the one that you know best, but if the choice is a toss-up I might lean towards Python. | I'm trying to write a pretty heavy-duty math-based project, which will parse through 100MB+ of data several times a day, so I need a fast language that's pretty easy to use. I would have gone with C, but getting a large project done in C is very difficult, especially with the low-level programming getting in your way. So, I was torn between python and java. Both are well equipped with OO features, so I don't mind that. Now, here are my pros for choosing python:
Very easy to use language
Has a pretty large library of useful stuff
Has an easy to use plotting library
Here are the cons:
Not exactly blazing
There isn't a native python neural network library that is active
I can't close source my code without going through quite a bit of trouble
Deploying python code on clients' computers is hard to deal with, especially when clients are idiots.
Here are the pros for choosing Java:
Huge library
Well supported
Easy to deploy
Pretty fast, possibly even comparable to C++
The Encog Neural Network Library is really active and pretty awesome
Networking support is really good
Strong typing
Here are the cons for Java:
I can't find a good graphing library like matplotlib for python
No built in support for big integers, that means another dependency (I mean REALLY big integers, not just math.BigInteger size)
File IO is kind of awkward compared to Python
Not a ton of array manipulating or "make programming easy" type of features that python has.
So, I was hoping you guys can tell me what to use. I'm equally familiar with both languages. Also, suggestions for other languages is great too.
EDIT: WOW! you guys are fast! 30 mins at 10 responses! | 0 | 1 | 12,225 |
0 | 52,617,040 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2011-01-25T00:22:00.000 | 0 | 3 | 0 | Unable to import matplotlib in PyDev | 4,788,748 | 0 | python,module,import,matplotlib,pydev | Right-click your project, go to Properties, then click PyDev - Interpreter/Grammar and click "Click here to configure an interpreter not listed". Select the interpreter you are using, click Install/Uninstall with pip, and enter matplotlib. Then restart Eclipse and it should work. | I am using Ubuntu 10.04 and have successfully configured PyDev to work with Python and have written a few simple example projects. Now I am trying to incorporate numpy and matplotlib. I have gotten numpy installed and within PyDev I did not need to alter any paths, etc., and after the installation of numpy I was automatically able to import numpy with no problem. However, following the same procedure with matplotlib hasn't worked. If I run Python from the command line, then import matplotlib works just fine. But within PyDev, I just get the standard error where it can't locate matplotlib when I try import matplotlib.
Since numpy didn't require any alteration of the PYTHONPATH, I feel that neither should matplotlib, so can anyone help me figure out why matplotlib isn't accessible from within my existing project while numpy is? Thanks for any help. | 0 | 1 | 3,642 |
0 | 6,766,298 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2011-01-25T00:22:00.000 | 1 | 3 | 0 | Unable to import matplotlib in PyDev | 4,788,748 | 0.066568 | python,module,import,matplotlib,pydev | I added numpy to the Forced Builtins and worked like charm. | I am using Ubuntu 10.04 and have successfully configured PyDev to work with Python and have written a few simple example projects. Now I am trying to incorporate numpy and matplotlib. I have gotten numpy installed and within PyDev I did not need to alter any paths, etc., and after the installation of numpy I was automatically able to import numpy with no problem. However, following the same procedure with matplotlib hasn't worked. If I run Python from the command line, then import matplotlib works just fine. But within PyDev, I just get the standard error where it can't locate matplotlib when I try import matplotlib.
Since numpy didn't require any alteration of the PYTHONPATH, I feel that neither should matplotlib, so can anyone help me figure out why matplotlib isn't accessible from within my existing project while numpy is? Thanks for any help. | 0 | 1 | 3,642 |
0 | 4,999,217 | 0 | 0 | 0 | 0 | 3 | true | 0 | 2011-01-25T00:22:00.000 | 2 | 3 | 0 | Unable to import matplotlib in PyDev | 4,788,748 | 1.2 | python,module,import,matplotlib,pydev | Sounds like the interpreter you set up for PyDev is not pointing to the appropriate version of python (the one where you've installed mpl and np). In the terminal, typing python is likely tantamount to env python; PyDev might not be using this interpreter.
But, if the pydev interpreter is pointed to the right location, you might simply have to rehash the interpreter (basically, set it up again) to have mpl show up.
You could try this in the terminal and see if the results are different (for the second command, substitute the full path of the interpreter PyDev is configured with; note that ${PYTHONPATH} is the module search path, not an interpreter location):
python -c 'import platform; print platform.python_version()'
/path/to/pydev/interpreter/python -c 'import platform; print platform.python_version()' | I am using Ubuntu 10.04 and have successfully configured PyDev to work with Python and have written a few simple example projects. Now I am trying to incorporate numpy and matplotlib. I have gotten numpy installed and within PyDev I did not need to alter any paths, etc., and after the installation of numpy I was automatically able to import numpy with no problem. However, following the same procedure with matplotlib hasn't worked. If I run Python from the command line, then import matplotlib works just fine. But within PyDev, I just get the standard error where it can't locate matplotlib when I try import matplotlib.
Since numpy didn't require any alteration of the PYTHONPATH, I feel that neither should matplotlib, so can anyone help me figure out why matplotlib isn't accessible from within my existing project while numpy is? Thanks for any help. | 0 | 1 | 3,642 |
0 | 10,869,425 | 0 | 0 | 0 | 0 | 2 | false | 11 | 2011-01-25T02:07:00.000 | 1 | 3 | 0 | Orange vs NLTK for Content Classification in Python | 4,789,318 | 0.066568 | python,machine-learning,nltk,naivebayes,orange | NLTK is a toolkit that supports a four-stage model of natural language processing:
Tokenizing: grouping characters as words. This ranges from trivial regex stuff to dealing with contractions like "can't"
Tagging. This is applying part-of-speech tags to the tokens (eg "NN" for noun, "VBG" for verb gerund). This is typically done by training a model (eg Hidden Markov) on a training corpus (i.e. a large list of hand-tagged sentences).
Chunking/Parsing. This is taking each tagged sentence and extracting features into a tree (eg noun phrases). This can be according to a hand-written grammar or one trained on a corpus.
Information extraction. This is traversing the tree and extracting the data. This is where your specific orange=fruit would be done.
NLTK supports WordNet, a huge semantic dictionary that classifies words. So there are 5 noun definitions for orange (fruit, tree, pigment, color, river in South Africa). Each of these has one or more 'hypernym paths' that are hierarchies of classifications. E.g. the first sense of 'orange' has two paths:
orange/citrus/edible_fruit/fruit/reproductive_structure/plant_organ/plant_part/natural_object/whole/object/physical_entity/entity
and
orange/citrus/edible_fruit/produce/food/solid/matter/physical_entity/entity
Depending on your application domain you can identify orange as a fruit, or a food, or a plant thing. Then you can use the chunked tree structure to determine more (who did what to the fruit, etc.) | We need a content classification module. Bayesian classifier seems to be what I am looking for. Should we go for Orange or NLTK ? | 0 | 1 | 3,710 |
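The hypernym-path traversal described above can be sketched without NLTK at all. The taxonomy below is a hand-made toy stand-in for WordNet, so the entries and structure are illustrative only, not real WordNet data:

```python
# Toy stand-in for WordNet's hypernym links: word -> list of parent categories.
HYPERNYMS = {
    'orange': ['citrus'],
    'citrus': ['edible_fruit'],
    'edible_fruit': ['fruit', 'produce'],
    'fruit': ['plant_part'],
    'produce': ['food'],
}

def hypernym_paths(word):
    """Return every chain from word up to a root of the toy taxonomy."""
    parents = HYPERNYMS.get(word, [])
    if not parents:
        return [[word]]
    return [[word] + path for p in parents for path in hypernym_paths(p)]

paths = hypernym_paths('orange')
# Two paths, mirroring the fruit/plant and produce/food hierarchies quoted above.
```

With real NLTK (and the wordnet corpus downloaded), the equivalent lookup is `wordnet.synsets('orange')[0].hypernym_paths()`.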
0 | 4,789,820 | 0 | 0 | 0 | 0 | 2 | false | 11 | 2011-01-25T02:07:00.000 | 3 | 3 | 0 | Orange vs NLTK for Content Classification in Python | 4,789,318 | 0.197375 | python,machine-learning,nltk,naivebayes,orange | I don't know Orange, but +1 for NLTK:
I've successfully used the classification tools in NLTK to classify text and related meta data. Bayesian is the default but there are other alternatives such as Maximum Entropy. Also, being a toolkit, you can customize as you see fit - eg. creating your own features (which is what I did for the meta data).
NLTK also has a couple of good books - one of which is available under Creative Commons (as well as O'Reilly). | We need a content classification module. Bayesian classifier seems to be what I am looking for. Should we go for Orange or NLTK ? | 0 | 1 | 3,710 |
0 | 4,855,557 | 0 | 1 | 0 | 0 | 1 | false | 7 | 2011-01-31T20:07:00.000 | 1 | 4 | 0 | Parsing CSV data from memory in Python | 4,855,523 | 0.049958 | python,csv | Use the StringIO module, which allows you to dress strings as file-like objects. That way you can pass a StringIO "file" to the CSV module for parsing (or any other parser you may be using). | Is there a way to parse CSV data in Python when the data is not in a file? I'm storing CSV data in my database and I'd like to parse it. I'm looking for something analogous to Ruby's CSV.parse. I know Python has a CSV class but everything I've seen in the docs seems to deal with files as opposed to in-memory CSV data.
(And it's not an option to parse the data before it goes into the database.)
(And please don't tell me not to store the CSV data in the database. I know what I'm doing as far as the database goes.) | 0 | 1 | 6,225 |
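A minimal sketch of the StringIO approach (the CSV text here is a made-up stand-in for a value fetched from the database; on Python 2 the module is `StringIO.StringIO` rather than `io.StringIO`):

```python
import csv
import io

csv_text = "name,qty\napples,3\npears,5\n"  # pretend this came out of a database column
reader = csv.reader(io.StringIO(csv_text))  # dress the string up as a file-like object
rows = list(reader)
# rows == [['name', 'qty'], ['apples', '3'], ['pears', '5']]
```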
0 | 4,895,933 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2011-02-04T07:45:00.000 | 0 | 2 | 0 | How would I go about categorizing sentences according to tense (present, past, future, etc.)? | 4,895,661 | 0 | python,nlp,grammar,nltk | Maybe you simply need to define patterns like "noun verb noun" etc. for each type of grammatical structure and search for matches in the part-of-speech tagger output sequence. | I want to parse a text and categorize the sentences according to their grammatical structure, but I have a very small understanding of NLP so I don't even know where to start.
As far as I have read, I need to parse the text and find out (or tag?) the part-of-speech of every word. Then I search for the verb clause or whatever other defining characteristic I want to use to categorize the sentences.
What I don't know is if there is already some method to do this more easily or if I need to define the grammar rules separately or what.
Any resources on NLP that discuss this would be great. Program examples are welcome as well. I have used NLTK before, but not extensively. Other parsers or languages are OK too! | 0 | 1 | 1,212 |
0 | 4,908,925 | 0 | 0 | 0 | 0 | 1 | false | 17 | 2011-02-05T05:50:00.000 | 8 | 3 | 0 | How to incrementally train an nltk classifier | 4,905,368 | 1 | python,nltk | There's 2 options that I know of:
1) Periodically retrain the classifier on the new data. You'd accumulate new training data in a corpus (that already contains the original training data), then every few hours, retrain & reload the classifier. This is probably the simplest solution.
2) Externalize the internal model, then update it manually. The NaiveBayesClassifier can be created directly by giving it a label_probdist and a feature_probdist. You could create these separately, pass them in to a NaiveBayesClassifier, then update them whenever new data comes in. The classifier would use this new data immediately. You'd have to look at the train method for details on how to update the probability distributions.
If I'm not mistaken, there doesn't appear to be a way to do this, in that the NaiveBayesClassifier.train method takes a complete set of training data. Is there a way to add to the the training data without feeding in the original featureset?
I'm open to suggestions including other classifiers that can accept new training data over time. | 0 | 1 | 3,608 |
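Option 2 boils down to keeping your own counts and rebuilding the probability estimates on demand. Here is a minimal pure-Python sketch of that idea - it does not use NLTK's classes at all, just an add-one-smoothed Naive Bayes over binary word features that can absorb new labelled examples at any time:

```python
from collections import defaultdict
import math

class IncrementalNB:
    """Naive Bayes over binary word features that can be updated with
    new labelled examples at any time."""

    def __init__(self):
        self.label_counts = defaultdict(int)  # label -> number of documents seen
        self.word_counts = defaultdict(lambda: defaultdict(int))  # label -> word -> docs containing it

    def update(self, words, label):
        self.label_counts[label] += 1
        for w in set(words):
            self.word_counts[label][w] += 1

    def classify(self, words):
        total = sum(self.label_counts.values())
        best_label, best_lp = None, float("-inf")
        for label, n in self.label_counts.items():
            lp = math.log(n / total)  # log prior
            for w in set(words):
                # add-one smoothing so unseen words don't zero out the score
                lp += math.log((self.word_counts[label][w] + 1) / (n + 2))
            if lp > best_lp:
                best_label, best_lp = label, lp
        return best_label
```

With NLTK itself you would instead rebuild the label_probdist/feature_probdist objects from your updated counts and hand them to NaiveBayesClassifier, as the answer describes.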
0 | 4,909,786 | 0 | 1 | 0 | 0 | 1 | false | 5 | 2011-02-05T21:23:00.000 | 2 | 2 | 0 | Python: Multi-dimensional array of different types | 4,909,748 | 0.197375 | python,arrays | In many cases such arrays are not required as there are more elegant solutions to these problems. Explain what you want to do so someone can give some hints.
Anyway, if you really, really need such a data structure, use array.array.
Also, writing [([None] * n) for i in xrange(m)] is quite a convoluted way of initializing an empty array, in contrast to something like empty_array(m, n). Is there a better alternative? | 0 | 1 | 5,249 |
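array.array is one-dimensional, so the usual trick for a truly contiguous m x n block is one flat typed buffer plus manual index arithmetic - a sketch:

```python
from array import array

m, n = 3, 4
grid = array('d', [0.0] * (m * n))  # one contiguous block of m*n C doubles

def cell(i, j):
    """Map a 2-D index onto the flat buffer (row-major order)."""
    return i * n + j

grid[cell(1, 2)] = 7.5
value = grid[cell(1, 2)]
```

For mixed types per cell there is no typed contiguous option in the standard library; that is essentially what numpy's structured arrays provide.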
0 | 4,919,289 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2011-02-05T23:48:00.000 | 1 | 2 | 0 | Comparing/Clustering Trajectories (GPS data of (x,y) points) and Mining the data | 4,910,510 | 0.099668 | python,algorithm,gps,gis,data-mining | 1) Extracting trajectories
I think you are heading in the right direction. There will probably be some noise in the GPS data, and some random walking; you should do some smoothing, e.g. with splines, to overcome it.
2) Mining the trajectories
Is there any business sense in similar trajectories? (This will help you build a distance metric, and then you can use some of Mahout's clustering algorithms.)
1. I think points where some person stopped are more interesting, so you can generate statistics for the popularity of places.
2. If you need route similarity to find different paths between the same start and end, first cluster the start/end locations and then compare the curves (by maximum distance between them, integral distance, or other well-known functional metrics).
1) Extracting trajectories I have a huge database of recorded GPS coordinates of the form (latitude, longitude, date-time). According to date-time values of consecutive records, I'm trying to extract all trajectories/paths followed by the person. For instance; say from time M, the (x,y) pairs are continuously changing up until time N. After N, the change in (x,y) pairs decrease, at which point I conclude that the path taken from time M to N can be called a trajectory. Is that a decent approach to follow when extracting trajectories? Are there any well-known approaches/methods/algorithms you can suggest? Are there any data structures or formats you would like to suggest me to maintain those points in an efficient manner? Perhaps, for each trajectory, figuring out the velocity and acceleration would be useful?
2) Mining the trajectories Once I have all the trajectories followed/paths taken, how can I compare/cluster them? I would like to know if the start or end points are similar, then how do the intermediate paths compare?
How do I compare the 2 paths/routes and conclude if they are similar or not. Furthermore; how do I cluster similar paths together?
I would highly appreciate it if you can point me to a research or something similar on this matter.
The development will be in Python, but all kinds of library suggestions are welcome.
Thanks in advance. | 0 | 1 | 6,451 |
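A sketch of the trajectory-extraction step described in (1): cut the time-ordered fixes wherever the implied speed drops below a threshold. The threshold value, the record layout, and the function names here are assumptions for illustration:

```python
import math
from datetime import datetime, timedelta

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon, ...) records."""
    R = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def split_trajectories(records, min_speed=0.5):
    """Cut a time-ordered list of (lat, lon, datetime) fixes into trajectories:
    consecutive fixes faster than min_speed (m/s) stay in the same segment."""
    segments, current = [], []
    for prev, cur in zip(records, records[1:]):
        dt = (cur[2] - prev[2]).total_seconds()
        speed = haversine_m(prev, cur) / dt if dt > 0 else 0.0
        if speed >= min_speed:
            if not current:
                current.append(prev)
            current.append(cur)
        elif current:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments
```

Velocity per segment falls out of the same speed computation; for the clustering step in (2), common curve distances include the Hausdorff and discrete Fréchet distances between segments.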
0 | 4,934,824 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2011-02-08T15:03:00.000 | 6 | 1 | 0 | Is there a way to extract text information from a postscript file? (.ps .eps) | 4,934,669 | 1.2 | python,image,text,postscript | It's likely that pgplot drew the fonts in the text directly with lines rather than using text. Especially since pgplot is designed to output to a huge range of devices including plotters where you would have to do this.
Edit:
If you have enough plots to be worth the effort then it's a very simple image processing task:
1. Convert each page to something like TIFF, in monochrome.
2. Threshold the image to binary; the text will be at the max pixel value.
3. Use a template matching technique. If you have a limited set of possible labels then just match the entire label; you can even start with a template of the correct size and rotation. Then just flag each plot as containing label[1-n], no need to read the actual text.
4. If you don't know the label then you can still do OCR fairly easily: just extract the region around the axis, rotate it for the vertical, and use Google's free OCR lib.
5. If you have pgplot you can even build the training set for OCR, or the template images directly, rather than having to harvest them from the image list. | I want to extract the text information contained in a postscript image file (the captions to my axis labels).
These images were generated with pgplot. I have tried ps2ascii and ps2txt on Ubuntu but they didn't produce any useful results. Does anyone know of another method?
Thanks | 0 | 1 | 1,410 |
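The threshold-and-template-match step can be sketched in pure Python on a binary image represented as a list of 0/1 rows (a real implementation would use OpenCV or scipy.ndimage for speed; all names and the tiny test pattern here are illustrative):

```python
def match_template(image, template):
    """Return (row, col) offsets where the binary template matches exactly."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    hits = []
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            if all(image[r + i][c + j] == template[i][j]
                   for i in range(h) for j in range(w)):
                hits.append((r, c))
    return hits

label = [[1, 0],
         [1, 1]]
page = [[0, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
hits = match_template(page, label)
# hits == [(1, 1)]
```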
1 | 4,938,599 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2011-02-08T21:17:00.000 | 5 | 2 | 0 | How do I control where an R plot is displayed, using python and rpy2? | 4,938,556 | 1.2 | python,r,plot,rpy2 | Plot to a graphics file using jpeg(), png() or another device, then display that file on your wxWidget. | I'm writing a program in Python. The first thing that happens is a window is displayed (I'm using wxPython) that has some buttons and text. When the user performs some actions, a plot is displayed in its own window. This plot is made with R, using rpy2. The problem is that the plot usually pops up on top of the main window, so the user has to move the plot to see the main window again. This is a big problem for the user, because he's lazy and good-for-nothing. He wants the plot to simply appear somewhere else, so he can see the main window and the plot at the same time, without having to lift a finger.
Two potential solutions to my problem are:
(1) display the plot within a wxPython frame (which I think I could control the location of), or
(2) be able to specify where on the screen the plot window appears.
I can't figure out how to do either. | 0 | 1 | 757 |
0 | 5,640,511 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2011-02-09T03:56:00.000 | 1 | 1 | 0 | How to draw matplotlib graph in eclipse? | 4,941,162 | 1.2 | python,eclipse,matplotlib | import matplotlib.pyplot as mp
mp.ion()        # interactive mode: figures draw as commands are issued
mp.plot(x, y)   # x and y are your data sequences
mp.show()
my matplotlib use GTKAgg as backend, I use Pydev as plugin of eclipse to develop python. | 0 | 1 | 3,911 |
0 | 4,962,223 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2011-02-10T20:21:00.000 | 1 | 5 | 0 | Python - Dijkstra's Algorithm | 4,962,202 | 0.039979 | python,dijkstra | Encapsulate that information in a Python object and you should be fine. | I need to implement Dijkstra's Algorithm in Python. However, I have to use a 2D array to hold three pieces of information - predecessor, length and unvisited/visited.
I know in C a Struct can be used, though I am stuck on how I can do a similar thing in Python, I am told it's possible but I have no idea to be honest | 0 | 1 | 5,571 |
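The struct-like record the question asks about maps naturally onto a small Python class; with modern Python a dataclass is the closest analogue (in 2011 one would have written a plain class with an __init__):

```python
from dataclasses import dataclass

@dataclass
class NodeInfo:
    """Per-node bookkeeping for Dijkstra, playing the role of the C struct."""
    predecessor: int = -1          # index of the previous node on the best path
    length: float = float("inf")   # best known distance from the source
    visited: bool = False

# One record per node instead of a 2D array of mixed types.
table = [NodeInfo() for _ in range(5)]
table[0].length = 0.0  # the source node
```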
0 | 4,962,291 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2011-02-10T20:21:00.000 | 0 | 5 | 0 | Python - Dijkstra's Algorithm | 4,962,202 | 0 | python,dijkstra | Python is an object-oriented language. So think of it like moving from structs in C to classes in C++. You can use the same class structure in Python as well. | I need to implement Dijkstra's Algorithm in Python. However, I have to use a 2D array to hold three pieces of information - predecessor, length and unvisited/visited.
I know in C a Struct can be used, though I am stuck on how I can do a similar thing in Python, I am told it's possible but I have no idea to be honest | 0 | 1 | 5,571 |
0 | 38,577,661 | 0 | 1 | 0 | 0 | 4 | false | 7 | 2011-02-11T17:23:00.000 | 2 | 5 | 0 | Unable to reinstall PyTables for Python 2.7 | 4,972,079 | 0.07983 | python,hdf5,pytables | Do the following steps:
brew tap homebrew/science
brew install hdf5
see where hdf5 is installed; the path is shown at the end of the second step
export HDF5_DIR=/usr/local/Cellar/hdf5/1.8.16_1/ (depending on the location it is installed to on your computer)
This one worked for me on Mac :-)
Found numpy 1.5.1 package installed.
.. ERROR:: Could not find a local HDF5 installation.
You may need to explicitly state where your local HDF5 headers and
library can be found by setting the HDF5_DIR environment
variable or by using the --hdf5 command-line option.
I am not clear on the HDF installation. I downloaded again - and copied it into a /usr/local/hdf5 directory. And tried to set the environment vars as suggested in the PyTables install. Has anyone else had this problem and could help? | 0 | 1 | 6,903
0 | 29,871,380 | 0 | 1 | 0 | 0 | 4 | false | 7 | 2011-02-11T17:23:00.000 | 2 | 5 | 0 | Unable to reinstall PyTables for Python 2.7 | 4,972,079 | 0.07983 | python,hdf5,pytables | I had to install libhdf5-8 and libhdf5-serial-dev first.
Then, for me, the command on Ubuntu was:
export HDF5_DIR=/usr/lib/x86_64-linux-gnu/hdf5/serial/ | I am installing Python 2.7 in addition to 2.7. When installing PyTables again for 2.7, I get this error -
Found numpy 1.5.1 package installed.
.. ERROR:: Could not find a local HDF5 installation.
You may need to explicitly state where your local HDF5 headers and
library can be found by setting the HDF5_DIR environment
variable or by using the --hdf5 command-line option.
I am not clear on the HDF installation. I downloaded again - and copied it into a /usr/local/hdf5 directory. And tried to set the environment vars as suggested in the PyTables install. Has anyone else had this problem and could help? | 0 | 1 | 6,903
0 | 13,868,984 | 0 | 1 | 0 | 0 | 4 | false | 7 | 2011-02-11T17:23:00.000 | 4 | 5 | 0 | Unable to reinstall PyTables for Python 2.7 | 4,972,079 | 0.158649 | python,hdf5,pytables | My HDF5 was installed with homebrew, so setting the environment variable as follows worked for me: HDF5_DIR=/usr/local/Cellar/hdf5/1.8.9 | I am installing Python 2.7 in addition to 2.7. When installing PyTables again for 2.7, I get this error -
Found numpy 1.5.1 package installed.
.. ERROR:: Could not find a local HDF5 installation.
You may need to explicitly state where your local HDF5 headers and
library can be found by setting the HDF5_DIR environment
variable or by using the --hdf5 command-line option.
I am not clear on the HDF installation. I downloaded again - and copied it into a /usr/local/hdf5 directory. And tried to set the environment vars as suggested in the PyTables install. Has anyone else had this problem and could help? | 0 | 1 | 6,903
0 | 4,992,253 | 0 | 1 | 0 | 0 | 4 | true | 7 | 2011-02-11T17:23:00.000 | 4 | 5 | 0 | Unable to reinstall PyTables for Python 2.7 | 4,972,079 | 1.2 | python,hdf5,pytables | The hdf5 command line option was not stated correctly ( --hdf5='/usr/local/hdf5' ). Sprinkling print statements in the setup.py made it easier to pin down the problem. | I am installing Python 2.7 in addition to 2.7. When installing PyTables again for 2.7, I get this error -
Found numpy 1.5.1 package installed.
.. ERROR:: Could not find a local HDF5 installation.
You may need to explicitly state where your local HDF5 headers and
library can be found by setting the HDF5_DIR environment
variable or by using the --hdf5 command-line option.
I am not clear on the HDF installation. I downloaded again - and copied it into a /usr/local/hdf5 directory. And tried to set the environment vars as suggested in the PyTables install. Has anyone else had this problem and could help? | 0 | 1 | 6,903
0 | 4,989,194 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2011-02-14T04:31:00.000 | 1 | 1 | 0 | Lattice Multiplication with threads, is it more efficient? | 4,988,830 | 1.2 | java,python,c,multiplication | If I understand you correctly, "lattice multiplication" is a different way of doing base-10 multiplication by hand that is supposed to be easier for kids to understand than the classic way. I assume "common multiplication" is the classic way.
So really, I think that the best answer is:
Neither "lattice multiplication" or "common multiplication" are good (efficient) ways of doing multiplication on a computer. For small numbers (up to 2**64), built-in hardware multiplication is better. For large numbers, you are best of breaking the numbers into 8 or 32 bit chunks ...
Multi-threading is unlikely to speed up multiplication unless you have very large numbers. The inherent cost of creating (or recycling) a thread is likely to swamp any theoretical speedup for smaller numbers. And for larger numbers (and larger numbers of threads) you need to worry about the bandwidth of copying the data around.
Note there is a bit of material around on parallel multiplication (Google), but it is mostly in the academic literature ... which maybe says something about how practical it really is for the kind of hardware used today for low and high end computing. | which one is faster:
Using Lattice Multiplication with threads(big numbers) OR
Using common Multiplication with threads(big numbers)
Do you know any source code, to test them?
-----------------EDIT------------------
The threads should be implemented in C or Java for testing | 0 | 1 | 519
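The "break the numbers into chunks" idea from the answer looks like this in pure Python. Python's own ints already do this internally, so the sketch is purely illustrative, and the 32-bit chunk size is an assumption:

```python
def mul_chunked(a, b, base=2**32):
    """Schoolbook multiplication of non-negative ints via 32-bit limbs,
    mimicking how big-number libraries break the work into chunks."""
    def to_limbs(n):
        limbs = []
        while n:
            limbs.append(n % base)
            n //= base
        return limbs or [0]

    la, lb = to_limbs(a), to_limbs(b)
    out = [0] * (len(la) + len(lb))
    for i, x in enumerate(la):
        carry = 0
        for j, y in enumerate(lb):
            t = out[i + j] + x * y + carry
            out[i + j] = t % base
            carry = t // base
        out[i + len(lb)] += carry
    # Recombine the limbs into a single integer.
    return sum(limb * base**k for k, limb in enumerate(out))
```

The inner loops over independent limb rows are exactly the pieces one would try to hand to separate threads; the carries are what make that harder than it looks.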
0 | 5,022,089 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2011-02-16T20:48:00.000 | 3 | 4 | 0 | large set of data, interpolation | 5,021,921 | 0.148885 | python,numpy,scipy,data-fitting | Except, a spline does not give you a "formula", at least not unless you have the wherewithal to deal with all of the piecewise segments. Even then, it will not be easily written down, or give you anything that is at all pretty to look at.
A simple spline gives you an interpolant. Worse, for 3000 points, an interpolating spline will give you roughly that many cubic segments! You did say interpolation before. Of course, an interpolating polynomial of that high an order will be complete crapola anyway, so don't think you can just go back there.
If all that you need is a tool that can provide an exact interpolation at any point, and you really don't need to have an explicit formula, then an interpolating spline is a good choice.
Or do you really want an approximant? A function that will APPROXIMATELY fit your data, smoothing out any noise? The fact is, a lot of the time when people who have no idea what they are doing say "interpolation" they really do mean approximation, smoothing. This is possible of course, but there are entire books written on the subject of curve fitting, the modeling of empirical data. Your first goal is then to choose an intelligent model that will represent this data. Best of course is if you have some intelligent choice of model from a physical understanding of the relationship under study; then you can estimate the parameters of that model using a nonlinear regression scheme, of which there are many to be found.
If you have no model, and are unwilling to choose one that roughly has the proper shape, then you are left with generic models in the form of splines, which can be fit in a regression sense, or with high order polynomial models, for which I have little respect.
My point in all of this is YOU need to make some choices and do some research on a choice of model. | I am looking for a "method" to get a formula, formula which comes from fitting a set of data (3000 point). I was using Legendre polynomial, but for > 20 points it gives not exact values. I can write chi2 test, but algorithm needs a loot of time to calculate N parameters, and at the beginning I don't know how the function looks like, so it takes time. I was thinking about splines... Maybe ...
So the input is: 3000 pints
Output : f(x) = ... something
I want to have a formula from fit. What is a best way to do this in python?
Let the force be with us!
Nykon | 0 | 1 | 2,988 |
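If a low-order polynomial approximant is acceptable, the least-squares fit can be written down via the normal equations even without numpy (in practice numpy.polyfit or scipy's splines are the sane route; this pure-Python sketch, with the made-up name polyfit_normal, is only safe for low degree, since the normal equations become ill-conditioned quickly):

```python
def polyfit_normal(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations.
    Returns coefficients c[0] + c[1]*x + ... + c[degree]*x**degree."""
    n = degree + 1
    # Build A^T A and A^T y for the Vandermonde matrix A.
    ata = [[sum(x**(i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x**i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    # Back substitution.
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (aty[i] - sum(ata[i][j] * coeffs[j] for j in range(i + 1, n))) / ata[i][i]
    return coeffs
```

The output coefficients are the explicit "formula" the question asks for, but per the answer above, for 3000 points a smoothing spline will almost always behave better than one global polynomial.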
0 | 5,022,008 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2011-02-16T20:48:00.000 | 0 | 4 | 0 | large set of data, interpolation | 5,021,921 | 0 | python,numpy,scipy,data-fitting | The only formula would be a polynomial of order 3000.
How good does the fit need to be? What type of formula do you expect? | I am looking for a "method" to get a formula, formula which comes from fitting a set of data (3000 point). I was using Legendre polynomial, but for > 20 points it gives not exact values. I can write chi2 test, but algorithm needs a loot of time to calculate N parameters, and at the beginning I don't know how the function looks like, so it takes time. I was thinking about splines... Maybe ...
So the input is: 3000 points
Output : f(x) = ... something
I want to have a formula from fit. What is a best way to do this in python?
Let the force be with us!
Nykon | 0 | 1 | 2,988 |
0 | 34,021,509 | 0 | 0 | 0 | 0 | 3 | false | 5 | 2011-02-17T00:27:00.000 | 1 | 3 | 0 | Problem with scipy.optimize.fmin_slsqp when using very large or very small numbers | 5,023,846 | 0.066568 | python,scipy,histogram,gaussian,least-squares | I got in trouble with this issue too, but I solved it in my project. I'm not sure if this is a general solution.
The reason was that scipy.optimize.fmin_slsqp calculates the gradient by an approximate approach when the argument jac is set to False or left at its default. The gradient produced by the approximate approach was not normalized (it had a large scale). When calculating the step length, a large gradient value would influence the performance and precision of the line search. This might be the reason why we got Positive directional derivative for linesearch.
You can try to implement the closed form of the Jacobian matrix of the objective function and pass it to the jac argument. More importantly, you should rescale the values of the Jacobian matrix (e.g. by normalization) to avoid affecting the line search.
Best. | Has anybody ever encountered problems with fmin_slsqp (or anything else in scipy.optimize) only when using very large or very small numbers?
I am working on some python code to take a grayscale image and a mask, generate a histogram, then fit multiple gaussians to the histogram. To develop the code I used a small sample image, and after some work the code was working brilliantly. However, when I normalize the histogram first, generating bin values <<1, or when I histogram huge images, generating bin values in the hundreds of thousands, fmin_slsqp() starts failing sporadically. It quits after only ~5 iterations, usually just returning a slightly modified version of the initial guess I gave it, and returns exit mode 8, which means "Positive directional derivative for linesearch." If I check the size of the bin counts at the beginning and scale them into the neighborhood of ~100-1000, fmin_slsqp() works as usual. I just un-scale things before returning the results. I guess I could leave it like that, but it feels like a hack.
I have looked around and found folks talking about the epsilon value, which is basically the dx used for approximating derivatives, but tweaking that has not helped. Other than that I haven't found anything useful yet. Any ideas would be greatly appreciated. Thanks in advance.
james | 0 | 1 | 5,001 |
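The scale-fit-unscale workaround described above can be factored into a small wrapper. This sketch assumes the fitted parameters scale linearly with the data (true for amplitude-like parameters; position and width parameters would have to pass through untouched), and `fit` stands in for whatever routine wraps fmin_slsqp - both names are made up:

```python
def fit_with_rescaling(fit, ys, target=1000.0):
    """Scale data into a friendly range, run the fit, undo the scale.

    fit: callable mapping a data vector to a list of amplitude-like parameters.
    """
    peak = max(abs(y) for y in ys) or 1.0
    s = target / peak
    params = fit([y * s for y in ys])  # optimizer sees values near `target`
    return [p / s for p in params]     # report results in the original units
```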
0 | 5,690,969 | 0 | 0 | 0 | 0 | 3 | false | 5 | 2011-02-17T00:27:00.000 | 4 | 3 | 0 | Problem with scipy.optimize.fmin_slsqp when using very large or very small numbers | 5,023,846 | 0.26052 | python,scipy,histogram,gaussian,least-squares | Are you updating your initial guess ("x0") when your underlying data changes scale dramatically? for any iterative linear optimization problem, these problems will occur if your initial guess is far from the data you're trying to fit. It's more of a optimization problem than a scipy problem. | Has anybody ever encountered problems with fmin_slsqp (or anything else in scipy.optimize) only when using very large or very small numbers?
I am working on some python code to take a grayscale image and a mask, generate a histogram, then fit multiple gaussians to the histogram. To develop the code I used a small sample image, and after some work the code was working brilliantly. However, when I normalize the histogram first, generating bin values <<1, or when I histogram huge images, generating bin values in the hundreds of thousands, fmin_slsqp() starts failing sporadically. It quits after only ~5 iterations, usually just returning a slightly modified version of the initial guess I gave it, and returns exit mode 8, which means "Positive directional derivative for linesearch." If I check the size of the bin counts at the beginning and scale them into the neighborhood of ~100-1000, fmin_slsqp() works as usual. I just un-scale things before returning the results. I guess I could leave it like that, but it feels like a hack.
I have looked around and found folks talking about the epsilon value, which is basically the dx used for approximating derivatives, but tweaking that has not helped. Other than that I haven't found anything useful yet. Any ideas would be greatly appreciated. Thanks in advance.
james | 0 | 1 | 5,001 |
0 | 8,394,696 | 0 | 0 | 0 | 0 | 3 | false | 5 | 2011-02-17T00:27:00.000 | 5 | 3 | 0 | Problem with scipy.optimize.fmin_slsqp when using very large or very small numbers | 5,023,846 | 0.321513 | python,scipy,histogram,gaussian,least-squares | I've had similar problems optimize.leastsq. The data I need to deal with often are very small, like 1e-18 and such, and I noticed that leastsq doesn't converge to best fit parameters in those cases. Only when I scale the data to something more common (like in hundreds, thousands, etc., something you can maintain resolution and dynamic range with integers), I can let leastsq converge to something very reasonable.
I've been trying around with those optional tolerance parameters so that I don't have to scale data before optimizing, but haven't had much luck with it...
Does anyone know a good general approach to avoid this problem with the functions in the scipy.optimize package? I'd appreciate it if you could share... I think the root cause is the same as in the OP's problem.
I am working on some python code to take a grayscale image and a mask, generate a histogram, then fit multiple gaussians to the histogram. To develop the code I used a small sample image, and after some work the code was working brilliantly. However, when I normalize the histogram first, generating bin values <<1, or when I histogram huge images, generating bin values in the hundreds of thousands, fmin_slsqp() starts failing sporadically. It quits after only ~5 iterations, usually just returning a slightly modified version of the initial guess I gave it, and returns exit mode 8, which means "Positive directional derivative for linesearch." If I check the size of the bin counts at the beginning and scale them into the neighborhood of ~100-1000, fmin_slsqp() works as usual. I just un-scale things before returning the results. I guess I could leave it like that, but it feels like a hack.
I have looked around and found folks talking about the epsilon value, which is basically the dx used for approximating derivatives, but tweaking that has not helped. Other than that I haven't found anything useful yet. Any ideas would be greatly appreciated. Thanks in advance.
james | 0 | 1 | 5,001 |
0 | 5,040,977 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2011-02-18T10:54:00.000 | 2 | 8 | 0 | How to draw the largest polygon from a set of points | 5,040,412 | 0.049958 | python,matplotlib | From your comments to other answers, you seem to already get the set of points defining the convex hull, but they're not ordered. The easiest way to order them would be to take a point inside the convex hull as the origin of a new coordinate frame. You then transform the (most probably) Cartesian coordinates of your points into polar coordinates, with respect to this new frame. If you order your points with respect to their polar angle coordinate, you can draw your convex hull. This is only valid if the set of your points defines a convex (non-concave) hull. | So, i have a set of points (x,y), and i want to be able to draw the largest polygon with these points as vertices. I can use patches.Polygon() in matplotlib, but this simply draws lines between the points in the order i give them. This doesn't automatically do what i want. As an example, if i want to draw a square, and sort the points by increasing x, and then by increasing y, i won't get a square, but two connecting triangles. (the line "crosses over")
So the problem now is to find a way to sort the list of points such that i "go around the outside" of the polygon when iterating over this list.
Or is there maybe some other functionality in Matplotlib which can do this for me? | 0 | 1 | 5,735 |
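The polar-angle sort from the answer above fits in a few lines (assuming, as stated, that the points really are the vertices of a convex polygon):

```python
import math

def order_convex(points):
    """Order convex-polygon vertices counter-clockwise by polar angle about
    the centroid, so Polygon() draws the outline without crossings."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

square = [(0, 0), (1, 1), (1, 0), (0, 1)]  # deliberately shuffled
ring = order_convex(square)
# ring == [(0, 0), (1, 0), (1, 1), (0, 1)] - consecutive vertices now share edges
```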
0 | 5,085,486 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2011-02-18T17:15:00.000 | 1 | 1 | 0 | nltk.cluster using a sparse representation | 5,044,359 | 0.197375 | python,nltk | It means that when you pass in the input vector, you can either pass in a numpy.array() or a nltk_contrib.unimelb.tacohn.SparseArrays.
I suggest you look at the package nltk_contrib.unimelb.tacohn to find the SparseArrays class. Then try to create your data with this class before passing it into nltk.cluster.
I am trying to use the nltk.cluster package to apply a simple kMeans to a word-document matrix. While it works when the matrix is a list of numpy array-like objects, I wasn't able to make it work for a sparse matrix representation (such as csc_matrix, csr_matrix or lil_matrix).
All the information that I found was:
Note that the vectors must use numpy array-like objects. nltk_contrib.unimelb.tacohn.SparseArrays may be used for efficiency when required
I do not understand what this means. Can anyone help me in this matter?
Thanks in advance! | 0 | 1 | 824 |
0 | 6,637,984 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2011-02-22T22:07:00.000 | 1 | 2 | 0 | Named Entity Recognition from personal Gazetter using Python | 5,084,578 | 0.099668 | python,nlp,nltk,named-entity-recognition | I haven't used NLTK enough recently, but if you have words that you know are skills, you don't need to do NER- just a text search.
Maybe use Lucene or some other search library to find the text, and then annotate it? That's a lot of work, but if you are working with a lot of data that might be OK. Alternatively, you could hack together a regex search, which will be slower but will probably work OK for smaller amounts of data and will be much easier to implement. | I am trying to do named entity recognition in Python using NLTK.
I want to extract a personal list of skills.
I have the list of skills and would like to search for them in a requisition and tag them.
I noticed that NLTK has NER tags for predefined entities like Person, Location, etc.
Is there an external gazetteer tagger in Python I can use?
Any idea how to do this in a more sophisticated way than a plain term search (sometimes with multi-word terms)?
Thanks,
Assaf | 0 | 1 | 2,226 |
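A sketch of the regex approach suggested above, with a made-up gazetteer; sorting the terms longest-first makes multi-word skills win over their single-word prefixes:

```python
import re

# hypothetical gazetteer of skills; longest terms first so multi-word matches win
skills = ["machine learning", "python", "sql"]
pattern = re.compile(
    r"\b(" + "|".join(re.escape(s) for s in sorted(skills, key=len, reverse=True)) + r")\b",
    re.IGNORECASE,
)

text = "Looking for a Python developer with machine learning and SQL experience."
found = [m.group(0).lower() for m in pattern.finditer(text)]
```

Each match object also carries start/end offsets, which is what you need to tag the spans in the original requisition text.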
0 | 5,112,349 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2011-02-25T00:26:00.000 | 1 | 3 | 0 | Python RGB array to HSL and back | 5,112,248 | 0.066568 | python,c,rgb | One option is to use OpenCV. Their Python bindings are pretty good (although not amazing). The upside is that it is a very powerful library, so this would just be the tip of the iceberg.
You could probably also do this very efficiently using numpy. | Well, I've seen some code to convert RGB to HSL, but how do I do it fast in Python?
It's strange to me that, for example, Photoshop does this within a second on an image, while in Python it often takes forever. At least it does with the code I use, so I think I'm using the wrong code to do it.
In my case my image is a simple but big raw array [r,g,b,r,g,b,r,g,b ....]
I would like this to be [h,s,l,h,s,l,h,s,l .......]
I would also like to be able to convert HSL back to RGB.
The image is actually 640x480 pixels.
Would it require some library or a wrapper around C code (I have never created a wrapper) to get it done fast?
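A hedged sketch of the numpy route: the function below vectorizes the same formulas the standard library's colorsys.rgb_to_hls uses, so a 640x480 image (307200 pixels) converts in milliseconds instead of a per-pixel Python loop. Reshape your flat [r,g,b,...] array with arr.reshape(-1, 3) first and scale to floats in [0, 1]:

```python
import numpy as np

def rgb_to_hsl(rgb):
    """Vectorized RGB -> HSL. rgb: float array of shape (N, 3), values in [0, 1].
    Returns an (N, 3) array of [h, s, l], matching colorsys.rgb_to_hls values."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    maxc = rgb.max(axis=1)
    minc = rgb.min(axis=1)
    l = (maxc + minc) / 2.0
    delta = maxc - minc
    nz = delta > 0                       # mask of non-grey pixels
    d = np.where(nz, delta, 1.0)         # dummy divisor avoids division by zero
    s = np.where(l <= 0.5,
                 delta / np.where(nz, maxc + minc, 1.0),
                 delta / np.where(nz, 2.0 - maxc - minc, 1.0))
    rc = (maxc - r) / d
    gc = (maxc - g) / d
    bc = (maxc - b) / d
    h = np.where(maxc == r, bc - gc,
                 np.where(maxc == g, 2.0 + rc - bc, 4.0 + gc - rc))
    h = np.where(nz, (h / 6.0) % 1.0, 0.0)
    return np.column_stack([h, s, l])
```

The inverse (HSL back to RGB) can be vectorized the same way from the colorsys.hls_to_rgb formulas.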
0 | 21,030,654 | 0 | 0 | 0 | 0 | 1 | false | 19 | 2011-02-26T16:05:00.000 | 0 | 4 | 0 | Can I get a view of a numpy array at specified indexes? (a view from "fancy indexing") | 5,127,991 | 0 | python,numpy | You could theoretically create an object that performs the role of a 'fancy view' into another array, and I can think of plenty of use cases for it. The problem is that such an object would not be compatible with the standard numpy machinery. All compiled numpy C code relies on data being accessible as an inner product of strides and indices. Generalizing this code to fundamentally different data layout formats would be a gargantuan undertaking. For a project that is trying to take on a challenge along these lines, check out Continuum's Blaze. | What I need is a way to get "fancy indexing" (y = x[[0, 5, 21]]) to return a view instead of a copy.
I have an array, but I want to be able to work with a subset of this array (specified by a list of indices) in such a way that changes to this subset are also put into the right places in the large array. If I just want to do something with the first 10 elements, I can just use regular slicing, y = x[0:10]. That works great, because regular slicing returns a view. The problem is if I don't want 0:10, but an arbitrary set of indices.
Is there a way to do this? | 0 | 1 | 3,684 |
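A quick demonstration of the view/copy distinction, which also shows the one workaround available when the wanted indices happen to be regularly spaced (a strided slice like x[::2] is a view):

```python
import numpy as np

x = np.arange(10)

basic = x[0:5]        # basic slicing returns a view
basic[0] = 99
assert x[0] == 99     # the change shows up in x

fancy = x[[1, 3, 5]]  # fancy indexing returns a copy
fancy[0] = -1
assert x[1] != -1     # x is untouched
```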
0 | 5,136,547 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2011-02-27T22:35:00.000 | 3 | 5 | 0 | python random number | 5,136,533 | 0.119427 | python | Since a die has 6 possible outcomes, if you get a 2 three times, this would be:
0 3 0 0 0 0 | I was wondering if someone can clarify this line for me.
Create a function die(x) which rolls a die x times, keeping track of how many times each face comes up, and returns a 1X6 array containing these numbers.
I am not sure what this means when it says a 1X6 array. I am using the randint function from numpy, so the output is already an array (or list); I'm not sure.
Thanks | 0 | 1 | 688 |
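A minimal implementation of the exercise, assuming numpy: np.bincount does the per-face tallying in one call (the name die and the 1..6 face encoding are just one way to read the assignment):

```python
import numpy as np

def die(x):
    """Roll a six-sided die x times; return a length-6 array of face counts."""
    rolls = np.random.randint(1, 7, size=x)      # faces 1..6 (upper bound is exclusive)
    return np.bincount(rolls, minlength=7)[1:]   # drop the unused 0 bin
```

The returned array is the "1X6 array": element i holds how many times face i+1 came up, and the counts always sum to x.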
0 | 5,136,717 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2011-02-27T22:35:00.000 | 1 | 5 | 0 | python random number | 5,136,533 | 0.039979 | python | If you have the results of the die rolls in a list lst, you can determine the number of times a 4 appeared by doing len([_ for _ in lst if _ == 4]). You should be able to figure the rest out from there.
Create a function die(x) which rolls a die x times, keeping track of how many times each face comes up, and returns a 1X6 array containing these numbers.
I am not sure what this means when it says a 1X6 array. I am using the randint function from numpy, so the output is already an array (or list); I'm not sure.
Thanks | 0 | 1 | 688 |
0 | 5,150,092 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2011-03-01T03:03:00.000 | 2 | 1 | 0 | Best text miner to use with python | 5,150,053 | 1.2 | python | I would go with NLTK since it's written in Python | I want to use an easy-to-use text miner with my Python code. I will be mainly using the classification algorithms - Naive Bayes, KNN and such.
Please let me know what's the best option here - Weka? NLTK? SVM? or something else?
Thanks! | 0 | 1 | 142 |
0 | 5,159,762 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2011-03-01T14:09:00.000 | 1 | 2 | 0 | creating a 3d numpy array from a non-contigous set of contigous 2d slices | 5,155,597 | 0.099668 | c++,python,numpy,python-c-api | Short answer, no.
Numpy expects all the data to be laid out in a simple, strided pattern. When iterating over the array, to advance in a dimension, it adds a constant, the stride size for that dimension, to the position in memory. So unless your 2-d slices are laid out regularly (e.g. every other row of a larger 3-d array), numpy will need to copy the data.
If you do have that order, you can do what you want. You'll need to make a PyArray struct where the data points to the first item, the strides are correct for the layout, and the descr is correct as well. Most importantly, you'll want to set the base member to another Python object to keep your big chunk of memory alive while this view exists. | Is it possible to use PyArray_NewFromDescr to create a numpy array object from a set of contiguous 2d arrays, without copying the data?
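To make the "regular layout" condition concrete: a slice like every-other-row is expressible with strides alone, so numpy hands back a view that shares memory with the parent, no copy involved (toy array below):

```python
import numpy as np

big = np.arange(24, dtype=np.int32).reshape(6, 4).copy()

# every other row: a strided view over big, not a copy
every_other = big[::2]

assert every_other.base is big                         # shares big's memory
assert every_other.strides[0] == 2 * big.strides[0]    # row stride is doubled

every_other[0, 0] = 99
assert big[0, 0] == 99                                 # writes flow through
```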
0 | 6,088,759 | 0 | 1 | 0 | 0 | 5 | false | 26 | 2011-03-06T23:52:00.000 | 12 | 11 | 0 | Python vs Matlab | 5,214,369 | 1 | python,matlab,ide | I've been getting on very well with the Spyder IDE in the Python(x,y) distribution. I'm a long term user of Matlab and have known of the existence of Python for 10 years or so but it's only since I installed Python(x,y) that I've started using Python regularly. | I'm considering making the switch from MATLAB to Python. The application is quantitative trading and cost is not really an issue. There are a few things I love about MATLAB and am wondering how Python stacks up (could not find any answers in the reviews I've read).
Is there an IDE for Python that is as good as MATLAB's (variable editor, debugger, profiler)? I've read good things about Spyder, but does it have a profiler?
When you change a function on the path in MATLAB, it is automatically reloaded. Do you have to manually re-import libraries when you change them, or can this been done automatically? This is a minor thing, but actually greatly improves my productivity. | 0 | 1 | 47,020 |
0 | 10,388,279 | 0 | 1 | 0 | 0 | 5 | false | 26 | 2011-03-06T23:52:00.000 | 2 | 11 | 0 | Python vs Matlab | 5,214,369 | 0.036348 | python,matlab,ide | after long long tryouts with many editors,
i have settled for aptana ide + ipython (including notebook in internet browser)
great for editing, getting help easy, try fast new things
aptana is the same as eclipse (because of pydev) but aptana has themes and different little things eclipse lacks
about python a little,
don't forget pandas, as it's (i believe) extremely powerful tool for data analysis
it will be a beast in the future, my opinion
i'm researching matlab, and i see some neat things there, especially gui interfaces and some other nice things
but python gives you flexibility and ease,
anyway, you still have to learn the basics of python, matplotlib, numpy (and eventually pandas)
but from what i see, numpy and matplotlib are similar to matplotlib concepts (probably they were created with matlab in mind, right?) | I'm considering making the switch from MATLAB to Python. The application is quantitative trading and cost is not really an issue. There are a few things I love about MATLAB and am wondering how Python stacks up (could not find any answers in the reviews I've read).
Is there an IDE for Python that is as good as MATLAB's (variable editor, debugger, profiler)? I've read good things about Spyder, but does it have a profiler?
When you change a function on the path in MATLAB, it is automatically reloaded. Do you have to manually re-import libraries when you change them, or can this been done automatically? This is a minor thing, but actually greatly improves my productivity. | 0 | 1 | 47,020 |
0 | 41,910,220 | 0 | 1 | 0 | 0 | 5 | false | 26 | 2011-03-06T23:52:00.000 | 6 | 11 | 0 | Python vs Matlab | 5,214,369 | 1 | python,matlab,ide | I've been in the engineering field for a while now and I've always used MATLAB for high-complexity math calculations. I never really had an major problems with it, but I wasn't super enthusiastic about it either. A few months ago I found out I was going to be a TA for a numerical methods class and that it would be taught using Python, so I would have to learn the language.
What I at first thought would be extra work turned out to be an awesome hobby. I can't even begin to describe how bad MATLAB is compared to Python! What used to take me all day to code in MATLAB takes me only a few hours to write in Python. My code looks infinitely more appealing as well. Python's performance and flexibility really surprised me. With Python I can literally do anything I used to do in MATLAB, and I can do it a lot better.
If anyone else is thinking about switching, I suggest you do it. It made my life a lot easier. I'll quote "Python Scripting for Computational Science" because they describe the pros of Python over MATLAB better than I do:
the Python programming language is more powerful,
the Python environment is completely open and made for integration with external tools,
a complete toolbox/module with lots of functions and classes can be contained in a single file (in contrast to a bunch of M-files),
transferring functions as arguments to functions is simpler,
nested, heterogeneous data structures are simple to construct and use,
object-oriented programming is more convenient,
interfacing C, C++, and Fortran code is better supported and therefore simpler,
scalar functions work with array arguments to a larger extent (without modifications of arithmetic operators),
the source is free and runs on more platforms.
Is there an IDE for Python that is as good as MATLAB's (variable editor, debugger, profiler)? I've read good things about Spyder, but does it have a profiler?
When you change a function on the path in MATLAB, it is automatically reloaded. Do you have to manually re-import libraries when you change them, or can this been done automatically? This is a minor thing, but actually greatly improves my productivity. | 0 | 1 | 47,020 |
0 | 5,214,892 | 0 | 1 | 0 | 0 | 5 | false | 26 | 2011-03-06T23:52:00.000 | 2 | 11 | 0 | Python vs Matlab | 5,214,369 | 0.036348 | python,matlab,ide | Almost everything is covered by others. I hope you don't need any toolboxes like the Optimization Toolbox, Neural Network Toolbox, etc. (I didn't find these for Python; maybe there are some, but I seriously doubt they would be better than the MATLAB ones.)
If you don't need symbolic manipulation capability and are using Windows, Python(x,y) is the way to go (they don't have much activity on their Linux port; older versions are available).
(Or, if you need some minor symbolic manipulations, use SymPy; I think it comes with EPD, and Python(x,y) supersedes/integrates EPD.)
If you need symbolic capabilities, Sage is the way to go; IMHO Sage stands up well against MATLAB as well as Mathematica.
I'm also trying to make the switch (I need it for my engineering projects).
I hope it helps.
Is there an IDE for Python that is as good as MATLAB's (variable editor, debugger, profiler)? I've read good things about Spyder, but does it have a profiler?
When you change a function on the path in MATLAB, it is automatically reloaded. Do you have to manually re-import libraries when you change them, or can this been done automatically? This is a minor thing, but actually greatly improves my productivity. | 0 | 1 | 47,020 |
0 | 9,317,942 | 0 | 1 | 0 | 0 | 5 | false | 26 | 2011-03-06T23:52:00.000 | 1 | 11 | 0 | Python vs Matlab | 5,214,369 | 0.01818 | python,matlab,ide | I have recently switched from MATLAB to Python (I am about 2 months into the transition), and am getting on fairly well using Sublime Text 2, using the SublimeRope and SublimeLinter plugins to provide some IDE-like capabilities, as well as pudb to provide some graphical interactive debugging capabilities.
I have not yet explored profilers or variable editors. (I never really used the MATLAB variable editor, anyway). | I'm considering making the switch from MATLAB to Python. The application is quantitative trading and cost is not really an issue. There are a few things I love about MATLAB and am wondering how Python stacks up (could not find any answers in the reviews I've read).
Is there an IDE for Python that is as good as MATLAB's (variable editor, debugger, profiler)? I've read good things about Spyder, but does it have a profiler?
When you change a function on the path in MATLAB, it is automatically reloaded. Do you have to manually re-import libraries when you change them, or can this been done automatically? This is a minor thing, but actually greatly improves my productivity. | 0 | 1 | 47,020 |
0 | 7,501,276 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2011-03-07T19:42:00.000 | 6 | 2 | 0 | Differences between python's numpy.ndarray and list datatypes | 5,224,420 | 1 | python,arrays,list,performance,numpy | There are several differences:
You can append elements to a list, but you can't change the size of a numpy.ndarray without making a full copy.
Lists can contain just about anything; in numpy arrays all the elements must have the same type.
In practice, numpy arrays are faster for vectorized operations than mapping functions over lists.
I think modification time is not an issue, but iteration over the elements is.
Numpy arrays have many array-related methods (argmin, min, sort, etc.).
I prefer to use numpy arrays when I need to do some mathematical operations (sum, average, array multiplication, etc.) and lists when I need to iterate over 'items' (strings, files, etc.). | What are the differences between Python's numpy.ndarray and list datatypes? I have vague ideas, but would like to get a definitive answer about:
Size in memory
Speed / order of access
Speed / order of modification in place but preserving length
Effects of changing length
Thanks! | 0 | 1 | 5,003 |
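A small demonstration of the points above (toy data): arrays get vectorized arithmetic and a fixed dtype, lists get heterogeneity and cheap appends.

```python
import numpy as np

lst = list(range(5))
arr = np.arange(5)

# vectorized arithmetic on the array matches the element-wise list version
assert (arr * 2).sum() == sum(x * 2 for x in lst)

# lists are heterogeneous and growable in place
lst.append("not a number")

# arrays are fixed-type: assigning a float into an int array truncates it
arr2 = arr.copy()
arr2[0] = 3.7
assert arr2[0] == 3
```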
0 | 5,231,844 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2011-03-08T11:09:00.000 | 5 | 1 | 0 | Calculating point-wise mutual information (PMI) score for n-grams in Python | 5,231,627 | 1.2 | python,n-gram | If I'm understanding your problem correctly, you want to compute things like log { P("x1 x2 x3 x4 x5") / P("x1") P("x2") ... P("x5") } where P measures the probability that any given 5-gram or 1-gram is a given thing (and is basically a ratio of counts, perhaps with Laplace-style offsets). So, make a single pass through your corpus and store counts of (1) each 1-gram, (2) each n-gram (use a dict for the latter), and then for each external n-gram you do a few dict lookups, a bit of arithmetic, and you're done. One pass through the corpus at the start, then a fixed amount of work per external n-gram.
(Note: Actually I'm not sure how one defines PMI for more than two random variables; perhaps it's something like log [ P(a)P(b)P(c)P(abc) / (P(ab)P(bc)P(ac)) ]. But if it's anything at all along those lines, you can do it the same way: iterate through your corpus counting lots of things, and then all the probabilities you need are simply ratios of the counts, perhaps with Laplace-ish corrections.)
If your corpus is so big that you can't fit the n-gram dict in memory, then divide it into kinda-memory-sized chunks, compute n-gram dicts for each chunk and store them on disc in a form that lets you get at any given n-gram's entry reasonably efficiently; then, for each external n-gram, go through the chunks and add up the counts.
What form? Up to you. One simple option: in lexicographic order of the n-gram (note: if you're working with words rather than letters, you may want to begin by turning words into numbers; you'll want a single preliminary pass over your corpus to do this); then finding the n-gram you want is a binary search or something of the kind, which with chunks 1GB in size would mean somewhere on the order of 15-20 seeks per chunk; you could add some extra indexing to reduce this. Or: use a hash table on disc, with Berkeley DB or something; in that case you can forgo the chunking. Or, if the alphabet is small (e.g., these are letter n-grams rather than word n-grams and you're processing plain English text), just store them in a big array, with direct lookup -- but in that case, you can probably fit the whole thing in memory anyway. | I have a large corpus of n-grams and several external n-grams. I want to calculate the PMI score of each external n-gram based on this corpus (the counts).
Are there any tools to do this or can someone provide me with a piece of code in Python that can do this?
The problem is that my n-grams are 2-grams, 3-grams, 4-grams, and 5-grams. So calculating probabilities for 3-grams and more are really time-consuming. | 0 | 1 | 4,572 |
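A minimal counting-pass sketch for bigram PMI on a made-up toy corpus; the same single pass generalizes to 3-, 4-, and 5-grams by keeping one Counter per n-gram order, and Laplace smoothing can be added for n-grams unseen in the corpus:

```python
import math
from collections import Counter

tokens = "the quick brown fox jumps over the lazy dog the quick fox".split()

# one pass over the corpus: unigram and bigram counts
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n_uni = sum(unigrams.values())
n_bi = sum(bigrams.values())

def pmi(w1, w2):
    # log [ P(w1 w2) / (P(w1) P(w2)) ], probabilities as ratios of counts
    p_xy = bigrams[(w1, w2)] / n_bi
    p_x = unigrams[w1] / n_uni
    p_y = unigrams[w2] / n_uni
    return math.log(p_xy / (p_x * p_y))
```

After the counting pass, scoring each external n-gram is a few dict lookups, so the total cost is one corpus scan plus constant work per query.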
0 | 5,268,722 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2011-03-11T02:39:00.000 | 3 | 3 | 0 | Pipe numpy data in Linux? | 5,268,391 | 0.197375 | python,linux,ubuntu,numpy,pipe | Check out the save and load functions. I don't think they would object to being passed a pipe instead of a file. | Is it possible to pipe numpy data (from one python script ) into the other?
Suppose that script1.py looks like this:
x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']})
print x
Suppose that from the linux command, I run the following:
python script1.py | script2.py
Will script2.py get the piped numpy data as an input (stdin)? will the data still be in the same format of numpy? (so that I can, for example, perform numpy operations on it from within script2.py)? | 0 | 1 | 751 |
0 | 5,268,401 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2011-03-11T02:39:00.000 | 2 | 3 | 0 | Pipe numpy data in Linux? | 5,268,391 | 1.2 | python,linux,ubuntu,numpy,pipe | No, data is passed through a pipe as text. You'll need to serialize the data in script1.py before writing, and deserialize it in script2.py after reading. | Is it possible to pipe numpy data (from one python script ) into the other?
Suppose that script1.py looks like this:
x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']})
print x
Suppose that from the linux command, I run the following:
python script1.py | script2.py
Will script2.py get the piped numpy data as an input (stdin)? will the data still be in the same format of numpy? (so that I can, for example, perform numpy operations on it from within script2.py)? | 0 | 1 | 751 |
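One concrete way to do the serialization the answer mentions: np.save and np.load accept any binary file object, which on Python 3 includes sys.stdout.buffer in script1.py and sys.stdin.buffer in script2.py. Simulated here with an in-memory buffer so the sketch is self-contained:

```python
import io
import numpy as np

x = np.zeros(3, dtype={'names': ['col1', 'col2'], 'formats': ['i4', 'f4']})
x['col1'] = [1, 2, 3]

# script1.py would do: np.save(sys.stdout.buffer, x)
buf = io.BytesIO()
np.save(buf, x)

# script2.py would do: y = np.load(sys.stdin.buffer)
buf.seek(0)
y = np.load(buf)
```

The round-tripped array keeps its structured dtype, so numpy operations work on it unchanged in the receiving script.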
0 | 5,299,434 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2011-03-14T13:35:00.000 | 1 | 3 | 0 | Program, that chooses the best out of 10 | 5,299,280 | 0.066568 | python,algorithm,variations | I'd solve this exercise with dynamic programming. You should be able to get the optimal solution in O(m*n) operations (n being the number of cars, m being the total mass).
That will only work however if the masses are all integers.
In general you have a binary linear programming problem. Those are very hard in general (NP-complete).
However, both ways lead to algorithms which I wouldn't consider to be beginner material. You might be better off with trial and error (as you suggested), or simply try every possible combination. | I need to make a program in Python that chooses cars from an array that is filled with the masses of 10 cars. The idea is that it fills a barge that can hold ~8 tonnes most effectively, so that minimum space is left unfilled. My idea is that it makes variations of the masses and chooses the one that is closest to the max weight. But since I'm new to algorithms, I don't have a clue how to do it. | 0 | 1 | 230
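A sketch of the dynamic-programming idea for integer masses: classic 0/1 subset-sum in O(n * capacity), where best_load returns the heaviest achievable total not exceeding the barge's capacity (recovering which cars to pick needs an extra parent pointer per mass, not shown):

```python
def best_load(masses, capacity):
    """Heaviest achievable subset mass <= capacity; masses must be integers."""
    reachable = [True] + [False] * capacity
    for w in masses:
        # iterate downwards so each car is used at most once
        for m in range(capacity, w - 1, -1):
            if reachable[m - w]:
                reachable[m] = True
    return max(m for m in range(capacity + 1) if reachable[m])
```

For example, best_load([3, 5, 7], 10) picks 3 + 7 = 10, filling the barge exactly. Non-integer masses can be handled by scaling to a common unit (e.g. kilograms) first.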
0 | 5,330,143 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2011-03-16T18:30:00.000 | 2 | 4 | 0 | Dictionary-like efficient storing of scipy/numpy arrays | 5,330,010 | 0.099668 | python,serialization,numpy,scipy | I would suggest to use scipy.save and have an dictionnary between the word and the name of the files. | BACKGROUND
The issue I'm working with is as follows:
Within the context of an experiment I am designing for my research, I produce a large number of large (length 4M) arrays which are somewhat sparse, and thereby could be stored as scipy.sparse.lil_matrix instances, or simply as scipy.array instances (the space gain/loss isn't the issue here).
Each of these arrays must be paired with a string (namely a word) for the data to make sense, as they are semantic vectors representing the meaning of that string. I need to preserve this pairing.
The vectors for each word in a list are built one-by-one, and stored to disk before moving on to the next word.
They must be stored to disk in a manner which could be then retrieved with dictionary-like syntax. For example if all the words are stored in a DB-like file, I need to be able to open this file and do things like vector = wordDB[word].
CURRENT APPROACH
What I'm currently doing:
Using shelve to open a shelf named wordDB
Each time the vector (currently using lil_matrix from scipy.sparse) for a word is built, storing the vector in the shelf: wordDB[word] = vector
When I need to use the vectors during the evaluation, I'll do the reverse: open the shelf, and then recall vectors by doing vector = wordDB[word] for each word, as they are needed, so that not all the vectors need be held in RAM (which would be impossible).
The above 'solution' fits my needs in terms of solving the problem as specified. The issue is simply that when I wish to use this method to build and store vectors for a large number of words, I simply run out of disk space.
This is, as far as I can tell, because shelve pickles the data being stored, which is not an efficient way of storing large arrays, thus rendering this storage problem intractable with shelve for the number of words I need to deal with.
PROBLEM
The question is thus: is there a way of serializing my set of arrays which will:
Save the arrays themselves in compressed binary format akin to the .npy files generated by scipy.save?
Meet my requirement that the data be readable from disk as a dictionary, maintaining the association between words and arrays? | 0 | 1 | 3,381 |
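One concrete alternative to shelve that meets both requirements, assuming the vectors can be stored densely: np.savez_compressed writes one compressed .npy per word into a single archive, and np.load on that archive is lazy and dictionary-like, decompressing only the arrays you index (toy vectors below):

```python
import os
import tempfile
import numpy as np

# toy word -> vector mapping (made up)
vectors = {"dog": np.random.rand(8), "cat": np.random.rand(8)}

path = os.path.join(tempfile.mkdtemp(), "wordDB.npz")
np.savez_compressed(path, **vectors)

db = np.load(path)   # lazy, dict-like: arrays are decompressed on access
v = db["dog"]
```

For genuinely sparse vectors, storing each word's (indices, values) pair as two small arrays under related keys keeps the same scheme while staying compact. One caveat: the whole archive is written at once, so very large collections are usually built in batches, one .npz per batch, with a small index mapping words to archives.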
0 | 5,357,196 | 0 | 0 | 0 | 0 | 3 | false | 1 | 2011-03-18T19:55:00.000 | 6 | 11 | 0 | how to generate all possible combinations of a 14x10 matrix containing only 1's and 0's | 5,357,171 | 1 | python,arrays,matlab,matrix | Generating every possible matrix of 1's and 0's for 14*10 would generate 2**140 matrices. I don't believe you would have enough lifetime for this, and I don't know if the sun will still be shining by the time you finish. This is why it is impossible to generate all those matrices; you must look for some other solution, as this is brute force. | I'm working on a problem and one solution would require an input of every 14x10 matrix that is possible to be made up of 1's and 0's... how can I generate these so that I can input every possible 14x10 matrix into another function? Thank you!
Added March 21: It looks like I didn't word my post appropriately. Sorry. What I'm trying to do is optimize the output of 10 different production units (given different speeds and amounts of downtime) for several scenarios. My goal is to place blocks of downtime to minimized the differences in production on a day-to-day basis. The amount of downtime and frequency each unit is allowed is given. I am currently trying to evaluate a three week cycle, meaning every three weeks each production unit is taken down for a given amount of hours. I was asking the computer to determine the order the units would be taken down based on the constraint that the lines come down only once every 3 weeks and the difference in daily production is the smallest possible. My first approach was to use Excel (as I tried to describe above) and it didn't work (no suprise there)... where 1- running, 0- off and when these are summed to calculate production. The calculated production is subtracted from a set max daily production. Then, these differences were compared going from Mon-Tues, Tues-Wed, etc for a three week time frame and minimized using solver. My next approach was to write a Matlab code where the input was a tolerance (set allowed variation day-to-day). Is there a program that already does this or an approach to do this easiest? It seems simple enough, but I'm still thinking through the different ways to go about this. Any insight would be much appreciated. | 0 | 1 | 3,156 |
0 | 5,357,340 | 0 | 0 | 0 | 0 | 3 | false | 1 | 2011-03-18T19:55:00.000 | 0 | 11 | 0 | how to generate all possible combinations of a 14x10 matrix containing only 1's and 0's | 5,357,171 | 0 | python,arrays,matlab,matrix | Instead of just suggesting that this is infeasible, I would suggest considering a scheme that samples the important subset of all possible combinations instead of applying a brute force approach. As one of your replies suggested, you are doing minimization. There are numerical techniques to do this such as simulated annealing and Monte Carlo sampling, as well as traditional minimization algorithms. You might want to look into whether one is appropriate in your case. | I'm working on a problem and one solution would require an input of every 14x10 matrix that is possible to be made up of 1's and 0's... how can I generate these so that I can input every possible 14x10 matrix into another function? Thank you!
Added March 21: It looks like I didn't word my post appropriately. Sorry. What I'm trying to do is optimize the output of 10 different production units (given different speeds and amounts of downtime) for several scenarios. My goal is to place blocks of downtime to minimized the differences in production on a day-to-day basis. The amount of downtime and frequency each unit is allowed is given. I am currently trying to evaluate a three week cycle, meaning every three weeks each production unit is taken down for a given amount of hours. I was asking the computer to determine the order the units would be taken down based on the constraint that the lines come down only once every 3 weeks and the difference in daily production is the smallest possible. My first approach was to use Excel (as I tried to describe above) and it didn't work (no suprise there)... where 1- running, 0- off and when these are summed to calculate production. The calculated production is subtracted from a set max daily production. Then, these differences were compared going from Mon-Tues, Tues-Wed, etc for a three week time frame and minimized using solver. My next approach was to write a Matlab code where the input was a tolerance (set allowed variation day-to-day). Is there a program that already does this or an approach to do this easiest? It seems simple enough, but I'm still thinking through the different ways to go about this. Any insight would be much appreciated. | 0 | 1 | 3,156 |
0 | 5,357,209 | 0 | 0 | 0 | 0 | 3 | false | 1 | 2011-03-18T19:55:00.000 | 0 | 11 | 0 | how to generate all possible combinations of a 14x10 matrix containing only 1's and 0's | 5,357,171 | 0 | python,arrays,matlab,matrix | Are you saying that you have a table with 140 cells and each value can be 1 or 0 and you'd like to generate every possible output? If so, you would have 2^140 possible combinations...which is quite a large number. | I'm working on a problem and one solution would require an input of every 14x10 matrix that is possible to be made up of 1's and 0's... how can I generate these so that I can input every possible 14x10 matrix into another function? Thank you!
Added March 21: It looks like I didn't word my post appropriately. Sorry. What I'm trying to do is optimize the output of 10 different production units (given different speeds and amounts of downtime) for several scenarios. My goal is to place blocks of downtime to minimized the differences in production on a day-to-day basis. The amount of downtime and frequency each unit is allowed is given. I am currently trying to evaluate a three week cycle, meaning every three weeks each production unit is taken down for a given amount of hours. I was asking the computer to determine the order the units would be taken down based on the constraint that the lines come down only once every 3 weeks and the difference in daily production is the smallest possible. My first approach was to use Excel (as I tried to describe above) and it didn't work (no suprise there)... where 1- running, 0- off and when these are summed to calculate production. The calculated production is subtracted from a set max daily production. Then, these differences were compared going from Mon-Tues, Tues-Wed, etc for a three week time frame and minimized using solver. My next approach was to write a Matlab code where the input was a tolerance (set allowed variation day-to-day). Is there a program that already does this or an approach to do this easiest? It seems simple enough, but I'm still thinking through the different ways to go about this. Any insight would be much appreciated. | 0 | 1 | 3,156 |
0 | 58,834,043 | 0 | 1 | 0 | 0 | 2 | false | 475 | 2011-03-19T18:39:00.000 | 1 | 14 | 0 | Reloading submodules in IPython | 5,364,050 | 0.014285 | python,ipython | Note that the above-mentioned autoreload only works in IntelliJ if you manually save the changed file (e.g. using ctrl+s or cmd+s). It doesn't seem to work with auto-saving. | Currently I am working on a Python project that contains submodules and uses numpy/scipy. IPython is used as an interactive console. Unfortunately I am not very happy with the workflow that I am using right now; I would appreciate some advice.
In IPython, the framework is loaded by a simple import command. However, it is often necessary to change code in one of the submodules of the framework. At this point a model is already loaded and I use IPython to interact with it.
Now, the framework contains many modules that depend on each other, i.e. when the framework is initially loaded the main module is importing and configuring the submodules. The changes to the code are only executed if the module is reloaded using reload(main_mod.sub_mod). This is cumbersome as I need to reload all changed modules individually using the full path. It would be very convenient if reload(main_module) would also reload all sub modules, but without reloading numpy/scipy.. | 0 | 1 | 217,455 |
0 | 57,911,718 | 0 | 1 | 0 | 0 | 2 | false | 475 | 2011-03-19T18:39:00.000 | 0 | 14 | 0 | Reloading submodules in IPython | 5,364,050 | 0 | python,ipython | Any subobjects will not be reloaded by this, I believe you have to use IPython's deepreload for that. | Currently I am working on a python project that contains sub modules and uses numpy/scipy. Ipython is used as interactive console. Unfortunately I am not very happy with workflow that I am using right now, I would appreciate some advice.
In IPython, the framework is loaded by a simple import command. However, it is often necessary to change code in one of the submodules of the framework. At this point a model is already loaded and I use IPython to interact with it.
Now, the framework contains many modules that depend on each other, i.e. when the framework is initially loaded the main module is importing and configuring the submodules. The changes to the code are only executed if the module is reloaded using reload(main_mod.sub_mod). This is cumbersome as I need to reload all changed modules individually using the full path. It would be very convenient if reload(main_module) would also reload all sub modules, but without reloading numpy/scipy.. | 0 | 1 | 217,455 |
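Two common remedies, sketched here: IPython ships an autoreload extension (run %load_ext autoreload, then %autoreload 2) that re-imports changed modules before each command, and plain Python offers importlib.reload for one module at a time. A self-contained demo of the latter, building a throwaway module on disk:

```python
import importlib
import os
import sys
import tempfile

# create a throwaway module we can edit on disk
d = tempfile.mkdtemp()
path = os.path.join(d, "mymod.py")
with open(path, "w") as f:
    f.write("VALUE = 1\n")
sys.path.insert(0, d)
importlib.invalidate_caches()

import mymod
assert mymod.VALUE == 1

# edit the source, then reload to pick up the change
with open(path, "w") as f:
    f.write("VALUE = 2\n")
st = os.stat(path)
os.utime(path, (st.st_atime, st.st_mtime + 10))  # make sure the mtime changes
importlib.reload(mymod)
assert mymod.VALUE == 2
```

Note that reload does not cascade into submodules: they have to be reloaded innermost-first, or you can look at IPython's deepreload for a recursive variant.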
0 | 5,377,789 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2011-03-21T12:07:00.000 | 0 | 2 | 0 | how to calculate trendline for stock charts | 5,377,347 | 0 | python,stocks | You could think about using a method that calculates the concave hull of your data. There are probably existing python implementations you could find. This will give you the boundary that encloses your timeseries. If there are outliers in your dataset that you wish to exclude, you could apply some sort of filter or smoothing to your data before you calculate the concave hull. I'm not 100% sure what you mean by "limit the number of touch points" and "find the relevant interval", but hopefully this will get you started. | I have read the topic: How do I calculate a trendline for a graph?
What I am looking for though is how to find the line that touches the outer extreme points of a graph. The intended use is calculation of support and resistance lines for stock charts. So it is not merely a simple regression, but it should also limit the number of touch points, and there should be a way to find the relevant interval. | 0 | 1 | 2,927
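The hull idea from the answer, sketched in pure Python with Andrew's monotone chain: the lower chain of the convex hull touches the outer lows (support candidates) and the upper chain the outer highs (resistance candidates). Limiting the number of touch points and choosing the interval would still be up to you:

```python
def _cross(o, a, b):
    """2-D cross product of vectors OA and OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_lines(points):
    """Return (lower, upper) convex-hull chains of (time, price) points.

    The lower chain touches the outer low points (support candidates),
    the upper chain the outer highs (resistance candidates).
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts, pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower, upper
```

For real price data you would feed it `(bar_index, price)` pairs, e.g. `hull_lines(list(enumerate(prices)))`; for large series, scipy.spatial's hull routines compute the same boundary.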
0 | 5,392,702 | 0 | 1 | 0 | 0 | 1 | false | 5 | 2011-03-22T14:08:00.000 | -2 | 4 | 0 | access numpy array from a functional language | 5,392,551 | -0.099668 | python,haskell,functional-programming,numpy,scipy | I can't imagine trying to use numpy through haskell or scheme will be easier than just writing functional python. Try using itertools and functools if you want a more functional flavored python. | My primary language is Python. Often when I need to do some cpu heavy task on a numpy array I use scipy.weave.inline to hook up c++ with great results.
I suspect many of the algorithms (machine learning stuff) can however be written simpler in a functional language (scheme, haskell...).
I was thinking. Is it possible to access numpy array data (read and write) from a functional language instead of having to use c++? | 0 | 1 | 887 |
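As a tiny illustration of the itertools/functools suggestion (the array contents here are made up; in practice numpy's vectorized operations are much faster than per-element map/reduce):

```python
import functools

import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0])

# fold (reduce) over the array; same result as data.sum()
total = functools.reduce(lambda acc, x: acc + x, data, 0.0)

# map-style transformation; the vectorized `data * 2.0` is equivalent
# and far faster, map() is shown only for the functional style
doubled = np.array(list(map(lambda x: 2.0 * x, data)))
```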
0 | 5,397,912 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2011-03-22T21:04:00.000 | 1 | 3 | 0 | Python - Best data structure for incredibly large matrix | 5,397,809 | 0.066568 | python,data-structures,vector,matrix,large-data-volumes | Use a sparse matrix assuming most entries are 0. | I need to create about 2 million vectors w/ 1000 slots in each (each slot merely contains an integer).
What would be the best data structure for working with this amount of data? It could be that I'm over-estimating the amount of processing/memory involved.
I need to iterate over a collection of files (about 34.5GB in total) and update the vectors each time one of the 2-million items (each corresponding to a vector) is encountered on a line.
I could easily write code for this, but I know it wouldn't be optimal enough to handle the volume of the data, which is why I'm asking you experts. :)
Best,
Georgina | 0 | 1 | 1,730 |
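If most of the 1000 slots of each vector stay zero, a sparse layout sidesteps allocating 2 million dense vectors. A pure-Python sketch (the item name and update rule are invented; scipy.sparse provides real matrix types built on the same idea):

```python
from collections import defaultdict

# vectors[item][slot] -> integer count; absent slots are implicitly 0
vectors = defaultdict(lambda: defaultdict(int))

def update(item, slot, amount=1):
    """Bump one slot of one item's vector (called per matching input line)."""
    vectors[item][slot] += amount

def as_dense(item, size=1000):
    """Materialize one item's vector as a plain list of `size` integers."""
    row = vectors[item]
    return [row.get(i, 0) for i in range(size)]

update("item-42", 7)
update("item-42", 7)
```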
0 | 5,627,073 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2011-03-28T19:19:00.000 | 0 | 1 | 0 | transforming coordinates in matplotlib | 5,463,924 | 1.2 | python,numpy,matplotlib,scipy | If you simply use matplotlib's plot function, the plot will fit into one online window, so you don't really need to 'rescale' explicitly. Linearly rescaling is pretty easy, if you include some code sample to show your formatting of the data, somebody can help you in translating the origin and scaling the coordinates. | I'm trying to plot a series of rectangles and lines based on a tab delimited text file in matplotlib. The coordinates are quite large in the data and shown be drawn to scale -- except scaled down by some factor X -- in matplotlib.
What's the easiest way to do this in matplotlib? I know that there are transformations, but I am not sure how to define my own transformation (i.e. where the origin is and what the scale factor is) in matplotlib and have it easily convert between "data space" and "plot space". Can someone please show a quick example or point me to the right place? | 0 | 1 | 1,909 |
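For what it's worth, a plain linear mapping applied to the data before plotting is often the simplest "transformation": pick an origin and a scale factor X and convert both ways. A sketch with made-up parameters (matplotlib's own transform machinery can do this too, but needs more setup; note matplotlib also autoscales axes by default):

```python
def to_plot_space(points, origin=(0.0, 0.0), scale=1.0):
    """Translate by `origin` and divide by `scale` (data -> plot coords)."""
    ox, oy = origin
    return [((x - ox) / scale, (y - oy) / scale) for x, y in points]

def to_data_space(points, origin=(0.0, 0.0), scale=1.0):
    """Inverse transform (plot -> data coords)."""
    ox, oy = origin
    return [(x * scale + ox, y * scale + oy) for x, y in points]
```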
0 | 5,466,008 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2011-03-28T22:54:00.000 | 2 | 2 | 0 | Run Fortran command line program within Python | 5,465,982 | 0.197375 | python,numpy,scipy,fortran77 | Can't you just dump the data generated by the Fortran program to a file and then read it from Python?
Numpy can read a binary file and treat it as an array.
Going from here to matplotlib then should be a breeze. | So I am in a bit of a pickle. I am trying to write plotting and fitting extensions to a Fortran77 (why this program was rewritten in F77 is a mystery to me, btw) code that requires command line input, i.e. it prompts the user for input. Currently the program uses GNUplot to plot, but the GNUplot fitting routine is less than ideal in my eyes, and calling GNUplot from Fortran is a pain in the ass to say the least.
I have mostly been working with Numpy, Scipy and Matplotlib to satisfy my fitting and plotting needs. I was wondering if there is a way to call the F77 program in Python and then have it run like I would any other F77 program until the portion where I need it to fit and spit out some nice plots (none of this GNUplot stuff).
I know about F2PY, but I have heard mixed things about it. I have also contemplated using pyexpect and going from there, but I have had bad experience with the way it handles changing expected prompts on the screen (or I am just using it incorrectly).
Thanks for any info on this. | 0 | 1 | 3,047 |
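The dump-to-file route from the answer, as a round trip (numpy writes the file here as a stand-in for the Fortran program; a real Fortran unformatted file adds record-length markers around each record, so the raw read below is only a sketch):

```python
import os
import tempfile

import numpy as np

data = np.linspace(0.0, 1.0, 5)

# stand-in for what the Fortran side would write out
path = os.path.join(tempfile.mkdtemp(), "fort.dat")
data.tofile(path)

# read the raw float64 stream back into an array
loaded = np.fromfile(path, dtype=np.float64)
```

From `loaded` onward, fitting with scipy and plotting with matplotlib proceed as with any other array.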
0 | 5,469,268 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2011-03-29T06:53:00.000 | 1 | 2 | 0 | Reading and graphing data read from huge files | 5,468,921 | 0.099668 | python,r,numpy,large-files,graphing | I think python+Numpy would be the most efficient way, regarding speed and ease of implementation.
Numpy is highly optimized so the performance is decent, and python would ease the algorithm implementation part.
This combo should work well for your case, provided you optimize the loading of the file into memory: aim for data blocks that aren't too large, yet large enough to minimize read and write cycles, because that is what will slow the program down.
If you feel that this needs more speeding up (which I sincerely doubt), you could use Cython to speed up the sluggish parts.
Currently, we are using bash scripts to turn the raw data into a csv file, with just the numbers that need to be graphed, and then feeding it into a gnuplot script. But this process is extremely slow. I tried to speed up the bash scripts by replacing some piped cuts, trs etc. with a single awk command; although this improved the speed, the whole thing is still very slow.
So, I am starting to believe there are better tools for this process. I am currently looking to rewrite this process in python+numpy or R. A friend of mine suggested using the JVM, and if I am to do that, I will use clojure, but am not sure how the JVM will perform.
I don't have much experience in dealing with these kind of problems, so any advice on how to proceed would be great. Thanks.
Edit: Also, I will want to store (to disk) the generated intermediate data, i.e., the csv, so I don't have to re-generate it, should I choose I want a different looking graph.
Edit 2: The raw data files have one record per line, whose fields are separated by a delimiter (|). Not all fields are numbers. Each field I need in the output csv is obtained by applying a certain formula on the input records, which may use multiple fields from the input data. The output csv will have 3-4 fields per line, and I need graphs that plot 1-2, 1-3, 1-4 fields in a (maybe) bar chart. I hope that gives a better picture.
Edit 3: I have modified @adirau's script a little and it seems to be working pretty well. I have come far enough that I am reading data, sending to a pool of processor threads (pseudo processing, append thread name to data), and aggregating it into an output file, through another collector thread.
PS: I am not sure about the tagging of this question, feel free to correct it. | 0 | 1 | 2,960 |
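The streaming step described in the edits, sketched in pure Python: one pipe-delimited record per line in, derived CSV rows out, never holding the whole 1-1.5 GB in memory. The formula is invented - substitute the real per-record computation:

```python
import csv
import io

def process(lines, out, formula=lambda f: (float(f[0]) + float(f[1]),)):
    """Stream pipe-delimited records and write derived CSV rows.

    `formula` maps the raw fields of one record to the output fields;
    the default used here is made up purely for illustration.
    """
    writer = csv.writer(out)
    for line in lines:
        fields = line.rstrip("\n").split("|")
        writer.writerow(formula(fields))

# in real use, `lines` would be an open file object over the raw log
src = io.StringIO("1|2|foo\n3|4|bar\n")
dst = io.StringIO()
process(src, dst)
```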
0 | 5,494,360 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2011-03-30T23:10:00.000 | 0 | 2 | 0 | Wordnet selectional restrictions in NLTK | 5,493,565 | 0 | python,nlp,nltk,wordnet | You could try using some of the similarity functions with handpicked synsets, and use that to filter. But it's essentially the same as following the hypernym tree - afaik all the wordnet similarity functions use hypernym distance in their calculations. Also, there's a lot of optional attributes of a synset that might be worth exploring, but their presence can be very inconsistent. | Is there a way to capture WordNet selectional restrictions (such as +animate, +human, etc.) from synsets through NLTK?
Or is there any other way of providing semantic information about synset? The closest I could get to it were hypernym relations. | 0 | 1 | 820 |
0 | 5,527,011 | 0 | 1 | 0 | 0 | 2 | true | 1 | 2011-04-03T01:29:00.000 | 2 | 2 | 0 | Can you pickle methods python method objects using any pickle module? | 5,526,981 | 1.2 | python,pickle | You can pickle plain functions. Python stores the name and module and uses that to reload it. If you want to do the same with methods and such that's a little bit trickier. It can be done by breaking such objects down into component parts and pickling those. | I've got a long-running script, and the operations that it performs can be broken down into independent, but order-sensitive function calls. In addition, the pattern here is a breadth-first walk over a graph, and restarting a depth after a crash is not a happy option.
So then, my idea is as follows: as the script runs, maintain a list of function-argument pairs. This list corresponds to the open list of a depth-first search of the graph. The data is readily accessible, so I don't need to be concerned about losing it.
In the (probably unavoidable) event of a crash due to network conditions or an error, the list of function argument pairs is enough to resume the operation.
So then, can I store the functions? My intuition says no, but I'd like some advice before I judge either way. | 0 | 1 | 852 |
0 | 5,526,985 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2011-04-03T01:29:00.000 | 0 | 2 | 0 | Can you pickle methods python method objects using any pickle module? | 5,526,981 | 0 | python,pickle | If the arguments are all that matters, why not store the arguments? Just store it as a straight tuple. | I've got a long-running script, and the operations that it performs can be broken down into independent, but order-sensitive function calls. In addition, the pattern here is a breadth-first walk over a graph, and restarting a depth after a crash is not a happy option.
So then, my idea is as follows: as the script runs, maintain a list of function-argument pairs. This list corresponds to the open list of a depth-first search of the graph. The data is readily accessible, so I don't need to be concerned about losing it.
In the (probably unavoidable) event of a crash due to network conditions or an error, the list of function argument pairs is enough to resume the operation.
So then, can I store the functions? My intuition says no, but I'd like some advice before I judge either way. | 0 | 1 | 852 |
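Tying the two answers to the question: because plain functions pickle by module + name, the open list of function-argument pairs can be pickled as-is for crash recovery. A sketch with a stdlib function standing in for the real graph-visit step:

```python
import os.path
import pickle

# the "open list" of (function, args) pairs described in the question;
# os.path.join is only a stand-in for the real per-node work function
open_list = [(os.path.join, ("a", "b")), (os.path.join, ("c", "d"))]

blob = pickle.dumps(open_list)       # functions are pickled by module + name
restored = pickle.loads(blob)        # after a crash, resume from here
results = [fn(*args) for fn, args in restored]
```

This only works for functions importable by name (module level); lambdas and bound methods need the decompose-into-parts trick mentioned above.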
0 | 9,875,395 | 0 | 0 | 0 | 0 | 3 | false | 213 | 2011-04-03T12:39:00.000 | 17 | 8 | 0 | Is it possible to specify your own distance function using scikit-learn K-Means Clustering? | 5,529,625 | 1 | python,machine-learning,cluster-analysis,k-means,scikit-learn | Yes, you can use a different metric function; however, by definition, the k-means clustering algorithm relies on the Euclidean distance from the mean of each cluster.
You could use a different metric, so even though you are still calculating the mean you could use something like the Mahalanobis distance. | Is it possible to specify your own distance function using scikit-learn K-Means Clustering? | 0 | 1 | 105,962
0 | 5,531,148 | 0 | 0 | 0 | 0 | 3 | false | 213 | 2011-04-03T12:39:00.000 | 57 | 8 | 0 | Is it possible to specify your own distance function using scikit-learn K-Means Clustering? | 5,529,625 | 1 | python,machine-learning,cluster-analysis,k-means,scikit-learn | Unfortunately no: scikit-learn's current implementation of k-means only uses Euclidean distances.
It is not trivial to extend k-means to other distances and denis' answer above is not the correct way to implement k-means for other metrics. | Is it possible to specify your own distance function using scikit-learn K-Means Clustering? | 0 | 1 | 105,962 |
0 | 56,324,541 | 0 | 0 | 0 | 0 | 3 | false | 213 | 2011-04-03T12:39:00.000 | 4 | 8 | 0 | Is it possible to specify your own distance function using scikit-learn K-Means Clustering? | 5,529,625 | 0.099668 | python,machine-learning,cluster-analysis,k-means,scikit-learn | Sklearn Kmeans uses the Euclidean distance. It has no metric parameter. This said, if you're clustering time series, you can use the tslearn python package, when you can specify a metric (dtw, softdtw, euclidean). | Is it possible to specify your own distance function using scikit-learn K-Means Clustering? | 0 | 1 | 105,962 |
0 | 5,555,313 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2011-04-05T13:30:00.000 | 2 | 2 | 0 | plotting a 2D matrix in python, code and most useful visualization | 5,552,641 | 0.197375 | python,numpy,matplotlib,plot,visualization | The key thing to consider is whether you have important structure along both dimensions in the matrix. If you do then it's worth trying a colored matrix plot (e.g., imshow), but if your ten topics are basically independent, you're probably better off doing ten individual line or histogram plots. Both plots have advantages and disadvantages.
In particular, in full matrix plots, the z-axis color values are not very precise or quantitative, so it's difficult to see, for example, small ripples on a trend, or quantitative assessments of rates of change, etc., so there's a significant cost to these. And they are also more difficult to pan and zoom, since one can get lost and therefore not examine the entire plot, whereas panning along a 1D plot is trivial.
Also, of course, as others have mentioned, 50K points is too many to actually visualize, so you'll need to sort them, or something, to reduce the number of values that you'll actually need to visually assess.
In practice though, finding a good visualization technique for a given data set is not always trivial, and for large and complex data sets, people try everything that has a chance of being helpful, and then choose what actually helps. | I have a very large matrix (10x55678) in "numpy" matrix format. The rows of this matrix correspond to some "topics" and the columns correspond to words (unique words from a text corpus). Each entry i,j in this matrix is a probability, meaning that word j belongs to topic i with probability x. Since I am using ids rather than the real words and since the dimension of my matrix is really large, I need to visualize it in some way. Which visualization do you suggest? A simple plot? Or a more sophisticated and informative one? (I am asking because I am ignorant about the useful types of visualization.) If possible can you give me an example using a numpy matrix? Thanks
The reason I asked this question is that I want to have a general view of the word-topic distributions in my corpus. Any other methods are welcome. | 0 | 1 | 30,834
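One way to act on the "sort them to reduce the values" advice with numpy: keep only the top-k most probable word ids per topic, which is far more readable than rendering all 55678 columns (random data stands in for the real topic-word matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
topic_word = rng.random((10, 55678))                 # shape from the question
topic_word /= topic_word.sum(axis=1, keepdims=True)  # rows sum to 1

top_k = 10
# indices of the k most probable word ids for each topic, best first
top_words = np.argsort(topic_word, axis=1)[:, ::-1][:, :top_k]
```

Plotting only these top-k columns per topic (e.g. ten small bar charts) keeps the figure legible.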
0 | 5,556,151 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2011-04-05T17:31:00.000 | 1 | 2 | 0 | Finding an object within an image containing many objects (Python) | 5,555,975 | 0.099668 | python,algorithm,image-processing | I guess the most straightforward way to achieve this is to compute the correlation map of the two images. Just convolve the two images using a scientific library such as scipy, apply a low pass filter and find the maximum value of the result.
You should check out the following packages:
numpy
scipy
matplotlib
PIL if your images are not in png format | I have to create a python image processing program which reads in two images, one containing a single object and the other containing several objects. However, the first image's object is present in the second image but is surrounded by other objects (some similar).
The images are both the same size but I am having problems in finding a method of comparing the images, picking out the matching object and then also placing a cross, or pointer of some sort on top of the object which is present in both images.
The Program should therefore open up both images originally needing to be compared, then after the comparison has taken place the image containing many objects should be displayed but with a pointer on the object most similar (matching) the object in the first image. | 0 | 1 | 711 |
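The correlation-map suggestion, sketched with plain numpy on toy arrays: slide the single-object image over the multi-object image and score each window (sum of squared differences here, instead of a proper normalized correlation); the best-scoring position is where the cross/pointer would be drawn:

```python
import numpy as np

def find_template(image, template):
    """Return (row, col) of the window most similar to `template` (SSD)."""
    th, tw = template.shape
    best_score, best_pos = float("inf"), None
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            window = image[i:i + th, j:j + tw]
            score = float(np.sum((window - template) ** 2))
            if score < best_score:
                best_score, best_pos = score, (i, j)
    return best_pos

# toy data: plant the "object" inside a larger "scene"
scene = np.zeros((8, 8))
obj = np.array([[1.0, 2.0], [3.0, 4.0]])
scene[3:5, 5:7] = obj
```

For real photographs this brute-force scan is slow, and an FFT-based correlation (e.g. via scipy) plus some normalization against brightness changes would be the practical route.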
0 | 5,715,781 | 0 | 0 | 0 | 0 | 3 | true | 13 | 2011-04-06T08:51:00.000 | 0 | 5 | 0 | Check for positive definiteness or positive semidefiniteness | 5,563,743 | 1.2 | python,math,matrix,scipy,linear-algebra | One good solution is to calculate all the minors of determinants and check they are all non negatives. | I want to check if a matrix is positive definite or positive semidefinite using Python.
How can I do that? Is there a dedicated function in SciPy for that or in other modules? | 0 | 1 | 19,269 |
0 | 5,565,951 | 0 | 0 | 0 | 0 | 3 | false | 13 | 2011-04-06T08:51:00.000 | 0 | 5 | 0 | Check for positive definiteness or positive semidefiniteness | 5,563,743 | 0 | python,math,matrix,scipy,linear-algebra | An easier method is to calculate the determinants of the minors for this matrix.
How can I do that? Is there a dedicated function in SciPy for that or in other modules? | 0 | 1 | 19,269 |
0 | 5,563,883 | 0 | 0 | 0 | 0 | 3 | false | 13 | 2011-04-06T08:51:00.000 | 18 | 5 | 0 | Check for positive definiteness or positive semidefiniteness | 5,563,743 | 1 | python,math,matrix,scipy,linear-algebra | I assume you already know your matrix is symmetric.
A good test for positive definiteness (actually the standard one!) is to try to compute its Cholesky factorization. It succeeds iff your matrix is positive definite.
This is the most direct way, since it needs O(n^3) operations (with a small constant), and you would need at least n matrix-vector multiplications to test "directly". | I want to check if a matrix is positive definite or positive semidefinite using Python.
How can I do that? Is there a dedicated function in SciPy for that or in other modules? | 0 | 1 | 19,269 |
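The Cholesky test from the answer above, as runnable code, plus an eigenvalue fallback for the semidefinite case (Cholesky alone rejects singular PSD matrices):

```python
import numpy as np

def is_positive_definite(m):
    """True iff the symmetric matrix `m` is positive definite."""
    try:
        np.linalg.cholesky(m)
        return True
    except np.linalg.LinAlgError:
        return False

def is_positive_semidefinite(m, tol=1e-10):
    """Semidefiniteness needs the eigenvalues; `tol` absorbs round-off."""
    return bool(np.all(np.linalg.eigvalsh(m) >= -tol))
```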
0 | 5,568,485 | 0 | 0 | 0 | 1 | 1 | false | 6 | 2011-04-06T14:47:00.000 | 2 | 5 | 0 | use python to generate graph in excel | 5,568,319 | 0.07983 | python,excel,charts,export-to-excel | I suggest you to try gnuplot while drawing graph from data files. | I have been trying to generate data in Excel.
I generated .CSV file.
So up to that point it's easy.
But generating a graph is quite hard in Excel...
I am wondering, is python able to generate data AND graph in excel?
If there are examples or code snippets, feel free to post it :)
Or a workaround can be to use python to generate the graph in a graphical format like .jpg, etc. A .pdf file is also OK, as long as the workaround doesn't need a dependency such as installing the Boost library.
0 | 55,520,433 | 0 | 1 | 0 | 0 | 2 | false | 75 | 2011-04-07T03:00:00.000 | 15 | 3 | 0 | Difference between "axes" and "axis" in matplotlib? | 5,575,451 | 1 | python,matplotlib | in the context of matplotlib,
axes is not the plural form of axis, it actually denotes the plotting area, including every axis. | I'm confused about what the difference between axes and axis is in matplotlib. Could someone please explain in an easy-to-understand way? | 0 | 1 | 16,337
0 | 5,575,468 | 0 | 1 | 0 | 0 | 2 | true | 75 | 2011-04-07T03:00:00.000 | 68 | 3 | 0 | Difference between "axes" and "axis" in matplotlib? | 5,575,451 | 1.2 | python,matplotlib | Axis is the axis of the plot, the thing that gets ticks and tick labels. The axes is the area your plot appears in. | I'm confused about what the difference between axes and axis is in matplotlib. Could someone please explain in an easy-to-understand way? | 0 | 1 | 16,337
0 | 5,590,275 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2011-04-08T03:38:00.000 | 4 | 4 | 0 | Scheme-less database solution to work on one machine only? | 5,590,148 | 1.2 | python,linux,mongodb,database,nosql | MongoDB is likely a good choice with respect to performance, flexibility and usability (easily approachable). However, large databases require careful planning - especially when it comes to backup and high availability. Without further insight into the project requirements there is little to say about whether one machine is enough or not (look at replica sets and sharding if you need options to scale).
Update: based on your new information - this should be doable with MongoDB (test and evaluate it). Simply put: MongoDB can be the "MySQL" of the NoSQL databases. If you know about SQL databases then you should be able to work with MongoDB easily, since it borrows a lot of ideas and concepts from the SQL world. Looking at your data model, it's trivial and the data can be easily retrieved and stored (not going into details). I suggest downloading MongoDB and walking through the tutorial.
The main goal here is to store time-series data, including more than a billion records, accessed by time stamp.
Data would be stored in the following scheme:
KEY --> "FIELD_NAME.YYYYMMDD.HHMMSS"
VALUE --> [v1, v2, v3, v4, v5, v6] (v1..v6 are just floats)
For instance, suppose that:
FIELD_NAME = "TOMATO"
TIME_STAMP = "20060316.184356"
VALUES = [72.34, -22.83, -0.938, 0.265, -2047.23]
I need to be able to retrieve VALUE (the entire array) given the combination of FIELD_NAME & TIME_STAMP.
The query VALUES["TOMATO.20060316.184356"] would return the vector [72.34, -22.83, -0.938, 0.265, -2047.23]. Reads of arrays should be as fast as possible.
Yet, I also need a way to store (in-place) a scalar value within an array. Suppose that I want to assign the 1st element of TOMATO on timestamp 2006/03/16.18:43:56 to be 500.867. In such a case, I need to have a fast mechanism to do so -- something like:
VALUES["TOMATO.20060316.184356"][0] = 500.867 (this would update on disk)
Can something like MongoDB work? I will be using just one machine (no need for replication etc), running Linux.
CLARIFICATION: only one machine will be used to store the database. Yet, I need a solution that will allow multiple machines to connect to the same database and update/insert/read/write data to/from it. | 0 | 1 | 401 |
0 | 5,602,512 | 0 | 1 | 0 | 0 | 2 | false | 3 | 2011-04-09T02:22:00.000 | 2 | 6 | 0 | Random picks from permutation generator? | 5,602,488 | 0.066568 | python,permutation,random-sample | I don't know how python implements its shuffle algorithm, but the following scales in linear time, so I don't see why a length of 10 is such a big deal (unless I misunderstand your question?):
start with the list of items;
go through each index in the list in turn, swapping the item at that index for an item at a random index (including the item itself) in the remainder of the list.
For a different permutation, just run the same algorithm again. | How to randomly pick all the results, one by one (no repeats) from itertools.permutations(k)? Or this: how to build a generator of randomized permutations? Something like shuffle(permutations(k)). I’m using Python 2.6.
Yeah, shuffle(r) could be used if r = list(permutations(k)), but such a list will take up too much time and memory when len(k) rises above 10.
Thanks. | 0 | 1 | 4,221 |
0 | 5,603,207 | 0 | 1 | 0 | 0 | 2 | false | 3 | 2011-04-09T02:22:00.000 | 2 | 6 | 0 | Random picks from permutation generator? | 5,602,488 | 0.066568 | python,permutation,random-sample | There's no way of doing what you have asked for without writing your own version of permutations.
Consider this:
We have a generator object containing the result of permutations.
We have written our own function to tell us the length of the generator.
We then pick an entry at random between the beginning of the list and the end.
Since I have a generator, if the random function picks an entry near the end of the list, the only way to get to it will be to go through all the prior entries and either throw them away, which is bad, or store them in a list, which you have pointed out is problematic when you have a lot of options.
Are you going to loop through every permutation or use just a few? If it's the latter it would make more sense to generate each new permutation at random and store then ones you've seen before in a set. If you don't use that many the overhead of having to create a new permutation each time you have a collision will be pretty low. | How to randomly pick all the results, one by one (no repeats) from itertools.permutations(k)? Or this: how to build a generator of randomized permutations? Something like shuffle(permutations(k)). I’m using Python 2.6.
Yeah, shuffle(r) could be used if r = list(permutations(k)), but such a list will take up too much time and memory when len(k) rises above 10.
Thanks. | 0 | 1 | 4,221 |
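The rejection-sampling variant discussed in this answer - draw a random permutation, skip repeats via a seen set - as a generator. Memory grows with the permutations yielded so far rather than with n! up front, so it helps when only a few of the many permutations are actually consumed:

```python
import math
import random

def shuffled_permutations(items):
    """Yield each permutation of `items` exactly once, in random order."""
    items = list(items)
    total = math.factorial(len(items))
    seen = set()
    while len(seen) < total:
        p = tuple(random.sample(items, len(items)))  # one random permutation
        if p not in seen:
            seen.add(p)
            yield p
```

If you will eventually consume nearly all permutations, collisions make this slow, and materializing the list plus shuffle(r) (or an index-unranking scheme) wins.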
0 | 5,610,731 | 0 | 0 | 0 | 0 | 1 | false | 14 | 2011-04-10T05:27:00.000 | 3 | 2 | 0 | How to save figures to pdf as raster images in matplotlib | 5,609,969 | 0.291313 | python,pdf,matplotlib,raster | Not that I know, but you can use the 'convert' program (ImageMagick') to convert a jpg to a pdf: `convert file.jpg file.pdf'. | I have some complex graphs made using matplotlib. Saving them to a pdf using the savefig command uses a vector format, and the pdf takes ages to open. Is there any way to save the figure to pdf as a raster image to get around this problem? | 0 | 1 | 5,667 |
0 | 5,651,029 | 0 | 0 | 0 | 0 | 1 | false | 12 | 2011-04-13T11:02:00.000 | 2 | 6 | 0 | Python tools to visualize 100k Vertices and 1M Edges? | 5,648,097 | 0.066568 | python,graph,wxpython,visualization | You should ask on the official wxPython mailing list. There are people there that can probably help you. I am surprised that matplotlib isn't able to do this though. It may just require you to restructure your code in some way. Right now, the main ways to draw in wxPython are via the various DCs, one of the FloatCanvas widgets or for graphing, wx.Plot or matplotlib. | I'm looking to visualize the data, hopefully make it interactive. Right now I'm using NetworkX and Matplotlib, which maxes out my 8gb when I attempt to 'draw' the graph. I don't know what options and techniques exist for handling such a large cluster** of data. If someone could point me in the right direction, that'd be great. I also have a CUDA enabled GFX card if that could be of use.
Right now I'm thinking of drawing only the most connected nodes, say top 5% of vertices with the most edges, then filling in less connected nodes as the user zooms or clicks. | 0 | 1 | 3,571 |
0 | 5,682,297 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2011-04-15T20:43:00.000 | 3 | 2 | 1 | Hadoop Vs. Disco Vs. Condor? | 5,682,175 | 1.2 | python,distributed-computing | I'm unfamiliar with Disco and Condor, but I can answer regarding Hadoop:
Hadoop pros:
Robust and proven - probably more than anything else out there. Used by many organizations (including the one I work for) to run clusters of 100s of nodes and more.
Large ecosystem = support + many subprojects to make life easier (e.g. Pig, Hive)
Python support should be possible through the streaming MR feature, or maybe Jython?
Hadoop cons:
Neither simple nor elegant (imho). You'll have to spend time learning. | I am trying to find a tool that will manage a bunch of jobs on 100 machines in a cluster (submit the jobs to the machines; make sure that jobs are run etc).
Which tool would be more simple to install / manage:
(1) Hadoop?
(2) Disco?
(3) Condor?
Ideally, I am searching for a solution that would be as simple as possible, yet be robust.
Python integration is also a plus. | 0 | 1 | 1,606 |
0 | 6,272,840 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2011-04-17T11:09:00.000 | 1 | 4 | 0 | Compoud charts with python | 5,693,151 | 0.049958 | python,charts | Pretty easy to do with pygooglechart -
You can basically follow the bar chart examples that ship with the software and then use the add_data_line method to make the lines on top of the bar chart | I want to generate compound charts (e.g: Bar+line) from my database using python.
How can i do this ?
Thanks in Advance | 0 | 1 | 277 |
0 | 5,697,057 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2011-04-17T22:49:00.000 | 3 | 1 | 0 | Python: How to load and use trained and pickled NLTK tagger to GAE? | 5,696,995 | 1.2 | python,google-app-engine,pickle,nltk | If your NLTK tagger code and data are of limited size, then carry them along with your GAE code.
If you have to act upon it to retrain the set, then storing the content of the file as a BLOB in the datastore would be an option, so that you can get, analyze, retrain and put. But that will limit the size of a data item to less than 1 MB because of the GAE hard limit. | I have a trained and pickled NLTK tagger (Brill's transformational rule-based tagger).
I want to use it on GAE. What is the best way to do it? | 0 | 1 | 312
0 | 5,729,858 | 0 | 0 | 0 | 0 | 1 | true | 28 | 2011-04-19T19:49:00.000 | 13 | 3 | 0 | Python: Making numpy default to float32 | 5,721,831 | 1.2 | python,numpy,numbers | Not that I am aware of. You either need to specify the dtype explicitly when you call the constructor for any array, or cast an array to float32 (use the ndarray.astype method) before passing it to your GPU code (I take it this is what the question pertains to?). If it is the GPU case you are really worried about, I favor the latter - it can become very annoying to try and keep everything in single precision without an extremely thorough understanding of the numpy broadcasting rules and very carefully designed code.
Another alternative might be to create your own methods which overload the standard numpy constructors (so numpy.zeros, numpy.ones, numpy.empty). That should go pretty close to keeping everything in float32. | Is there any clean way of setting numpy to use float32 values instead of float64 globally? | 0 | 1 | 13,987 |
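A quick illustration of the two options from this answer (to my knowledge numpy exposes no global float32 switch; dtype is chosen per array):

```python
import numpy as np

a = np.zeros(4)                      # float64: numpy's default
b = np.zeros(4, dtype=np.float32)    # explicit dtype at construction
c = a.astype(np.float32)             # cast an existing array, e.g. before GPU hand-off
```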
0 | 5,887,256 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2011-04-21T20:32:00.000 | 1 | 2 | 0 | Pros and cons to using sparse matrices in python/R? | 5,749,479 | 0.099668 | python,r,sparse-matrix | There are several ways to represent sparse matrices (documentation for the R SparseM package reports 20 different ways to store sparse matrix data), so complete compatibility with all solutions is probably out of the question. The number of options also suggests that there is no best-in-all-situations solution.
Pick either the numpy sparse matrices or R's SparseM (through rpy2) according to where your heavy number crunching routines on those matrices are found (numpy or R). | I'm working with large, sparse matrices (document-feature matrices generated from text) in python. It's taking quite a bit of processing time and memory to chew through these, and I imagine that sparse matrices could offer some improvements. But I'm worried that using a sparse matrix library is going to make it harder to plug into other python (and R, through rpy2) modules.
Can people who've crossed this bridge already offer some advice? What are the pros and cons of using sparse matrices in python/R, in terms of performance, scalability, and compatibility? | 0 | 1 | 872 |
0 | 5,796,773 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2011-04-26T17:26:00.000 | 0 | 2 | 0 | How to build numpy with conary recipe for x-86 CentOs | 5,794,093 | 1.2 | python,numpy,centos | The command `$ rbuild build packages numpy --no-recurse` should be executed at the level where the package is, not in the Development/QA or Release directories. | I am getting an error:
rbuild build packages numpy --no-recurse
[Tue Apr 26 13:16:53 2011] Creating rMake build job for 1 items
Added Job 902
numpy:source=rmake-repository.bericots.com@local:taf32-sandbox-1-devel/1-0.2
[13:17:23] Watching job 902
[13:17:24] [902] Loading 1 out of 1: numpy:source
[13:17:26] [902] - State: Loaded
[13:17:26] [902] - job troves set
[13:17:27] [902] - State: Building
[13:17:27] [902] - Building troves
d all buildreqs: [TroveSpec('python2.4:runtime'), TroveSpec('python2.4:devel')]
[13:17:27] [902] - numpy:source{x86} - State: Queued
[13:17:27] [902] - numpy:source{x86} - Ready for dep resolution
[13:17:28] [902] - numpy:source{x86} - State: Resolving
[13:17:28] [902] - numpy:source{x86} - Resolving build requirements
[13:17:28] [902] - State: Failed
[13:17:28] [902] - Failed while building: Build job had failures:
* numpy:source: Could not satisfy build requirements: python2.4:runtime=[], python2.4:devel=[]
902 numpy{x86} - Resolving - (Job Failed) ([h]elp)>
error: Job 902 has failures, not committing
error: Package build failed
Any ideas? | 0 | 1 | 170
0 | 5,816,755 | 0 | 1 | 0 | 0 | 1 | false | 6 | 2011-04-27T21:50:00.000 | 4 | 3 | 0 | confidence interval with leastsq fit in scipy python | 5,811,043 | 0.26052 | python,scipy,least-squares,confidence-interval | I am not sure what you mean by confidence interval.
In general, leastsq doesn't know much about the function that you are trying to minimize, so it can't really give a confidence interval. However, it does return an estimate of the Hessian, in other words the generalization of 2nd derivatives to multidimensional problems.
As hinted in the docstring of the function, you could use that information along with the residuals (the difference between your fitted solution and the actual data) to compute the covariance of the parameter estimates, which is a local guess of the confidence interval.
Note that this is only local information, and I suspect that you can, strictly speaking, come to a conclusion only if your objective function is strictly convex. I don't have any proofs or references for that statement :). | How to calculate confidence interval for the least square fit (scipy.optimize.leastsq) in python? | 0 | 1 | 6,800
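As a hedged sketch of the procedure described in the answer (synthetic straight-line data; scaling cov_x by the residual variance follows the hint in the leastsq docstring, and the resulting interval is only as good as the local, near-normal assumptions noted above):

```python
import numpy as np
from scipy.optimize import leastsq

# Synthetic data: y = 2x + 1 plus fixed pseudo-random noise.
x = np.linspace(0, 10, 50)
rng = np.random.RandomState(0)
y = 2.0 * x + 1.0 + 0.5 * rng.randn(x.size)

def residuals(p, x, y):
    return y - (p[0] * x + p[1])

p0 = [1.0, 0.0]
popt, cov_x, infodict, mesg, ier = leastsq(residuals, p0, args=(x, y),
                                           full_output=True)

# Scale cov_x by the residual variance to get the parameter covariance.
dof = x.size - len(popt)
s_sq = (residuals(popt, x, y) ** 2).sum() / dof
pcov = cov_x * s_sq

# Standard errors; an approximate 95% interval is popt +/- 1.96 * perr,
# assuming normally distributed errors and a well-behaved objective.
perr = np.sqrt(np.diag(pcov))
print(popt, perr)
```

Note that cov_x can come back as None when the Jacobian is singular at the solution, so production code should check for that before scaling.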
0 | 5,821,933 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2011-04-28T16:27:00.000 | 0 | 2 | 0 | Matplotlib histogram, frequency as thousands | 5,821,895 | 0 | python,graph,matplotlib,histogram | Just convert the values yourself before they are plotted: compute the histogram counts with numpy and divide the count array by 1000 (counts/1000) before drawing the bars. | I have a histogram I'm drawing in matplotlib with some 260,000 values or so.
The problem is that the frequency axis (y axis) on the histogram reaches high numbers such as 100,000... What I'd really like is to have the y labels as thousands, so instead of, for instance:
100000
75000
50000
25000
0
To have this:
100
75
50
25
0
And then I can simply change the y axis to "Frequency (000s)" -- it makes it much easier to read that way. Anyone with any ideas how that can be achieved? | 0 | 1 | 3,743 |
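An alternative sketch that avoids rescaling the counts at all: keep the raw histogram and let a matplotlib FuncFormatter rewrite the y tick labels (the `thousands` helper name is my own, not a matplotlib built-in; the random data is a stand-in for the real 260,000 values):

```python
import matplotlib
matplotlib.use("Agg")                  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
import numpy as np

def thousands(y, pos):
    """Turn a tick value like 100000 into the label '100'."""
    return '%g' % (y / 1000.0)

data = np.random.randn(260000)         # stand-in for the real data

fig, ax = plt.subplots()
ax.hist(data, bins=50)
ax.yaxis.set_major_formatter(FuncFormatter(thousands))
ax.set_ylabel("Frequency (000s)")
fig.savefig("hist.png")
```

This way the underlying counts stay untouched, so anything else reading the axis data (cursors, further annotations) still sees the true frequencies.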
0 | 6,029,955 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2011-04-29T02:27:00.000 | -1 | 2 | 0 | PyCUDA/CUDA: Causes of non-deterministic launch failures? | 5,827,219 | -0.099668 | python,cuda,gpgpu,pycuda | You can use the NVIDIA CUDA Profiler to see what gets executed before the failure. | Anyone following CUDA will probably have seen a few of my queries regarding a project I'm involved in, but for those who haven't, I'll summarize. (Sorry for the long question in advance)
Three kernels: one generates a data set based on some input variables (it deals with bit combinations, so it can grow exponentially), another solves these generated linear systems, and a reduction kernel gets the final result out. These three kernels are run over and over again as part of an optimisation algorithm for a particular system.
On my dev machine (GeForce 9800GT, running under CUDA 4.0) this works perfectly, all the time, no matter what I throw at it (up to a computational limit based on the stated exponential nature), but on a test machine (4x Tesla S1070, only one used, under CUDA 3.1) the exact same code (Python base, PyCUDA interface to CUDA kernels) produces the correct results for 'small' cases, but in mid-range cases the solving stage fails on random iterations.
Previous problems I've had with this code have been to do with the numeric instability of the problem, and have been deterministic in nature (i.e. failing at exactly the same stage every time), but this one is frankly pissing me off, as it will fail whenever it wants to.
As such, I don't have a reliable way of breaking the CUDA code out from the Python framework and doing proper debugging, and PyCUDA's debugger support is questionable to say the least.
I've checked the usual things like pre-kernel-invocation checks of free memory on the device, and occupancy calculations say that the grid and block allocations are fine. I'm not doing any crazy 4.0-specific stuff; I'm freeing everything I allocate on the device at each iteration, and I've fixed all the data types as floats.
TL;DR: has anyone come across any gotchas regarding CUDA 3.1 that I haven't seen in the release notes, or any issues with PyCUDA's autoinit memory management environment that would cause intermittent launch failures on repeated invocations? | 0 | 1 | 913