Dataset schema (min/max are value ranges for numeric columns, and string lengths for string columns):

Column                              Dtype     Min      Max
GUI and Desktop Applications        int64     0        1
A_Id                                int64     5.3k     72.5M
Networking and APIs                 int64     0        1
Python Basics and Environment       int64     0        1
Other                               int64     0        1
Database and SQL                    int64     0        1
Available Count                     int64     1        13
is_accepted                         bool      2 classes
Q_Score                             int64     0        1.72k
CreationDate                        string    23       23
Users Score                         int64     -11      327
AnswerCount                         int64     1        31
System Administration and DevOps    int64     0        1
Title                               string    15       149
Q_Id                                int64     5.14k    60M
Score                               float64   -1       1.2
Tags                                string    6        90
Answer                              string    18       5.54k
Question                            string    49       9.42k
Web Development                     int64     0        1
Data Science and Machine Learning   int64     1        1
ViewCount                           int64     7        3.27M
0
3,188,680
0
0
0
0
2
false
6
2010-07-02T16:58:00.000
0
3
1
Picking a front-end/interpreter for a scientific code
3,167,661
0
c++,python,matlab,tcl,interpreter
Well, unless there are any other suggestions, the final answer I have arrived at is to go with Python. I seriously considered matlab/octave, but when reading the octave API and matlab API, they are different enough that I'd need to build separate interfaces for each (or get very creative with macros). With python I end up with a single, easier to maintain codebase for the front end, and it is used by just about everyone we know. Thanks for the tips/feedback everyone!
The simulation tool I have developed over the past couple of years, is written in C++ and currently has a tcl interpreted front-end. It was written such that it can be run either in an interactive shell, or by passing an input file. Either way, the input file is written in tcl (with many additional simulation-specific commands I have added). This allows for quite powerful input files (e.g.- when running monte-carlo sims, random distributions can be programmed as tcl procedures directly in the input file). Unfortunately, I am finding that the tcl interpreter is becoming somewhat limited compared to what more modern interpreted languages have to offer, and its syntax seems a bit arcane. Since the computational engine was written as a library with a c-compatible API, it should be straightforward to write alternative front-ends, and I am thinking of moving to a new interpreter, however I am having a bit of a time choosing (mostly because I don't have significant experience with many interpreted languages). The options I have begun to explore are as follows: Remaining with tcl: Pros: - No need to change the existing code. - Existing input files stay the same. (though I'd probably keep the tcl front end as an option) - Mature language with lots of community support. Cons: - Feeling limited by the language syntax. - Getting complaints from users as to the difficulty of learning tcl. Python: Pros: - Modern interpreter, known to be quite efficient. - Large, active community. - Well known scientific and mathematical modules, such as scipy. - Commonly used in the academic Scientific/engineering community (typical users of my code) Cons: - I've never used it and thus would take time to learn the language (this is also a pro, as I've been meaning to learn python for quite some time) - Strict formatting of the input files (indentation, etc..) Matlab: Pros: - Very power and widely used mathematical tool - Powerful built-in visualization/plotting. 
- Extensible, through community submitted code, as well as commercial toolboxes. - Many in science/engineering academia are familiar with and comfortable with matlab. Cons: - Cannot distribute as an executable- would need to be an add-on/toolbox. - Would require (?) the matlab compiler (which is pricy). - Requires Matlab, which is also pricy. These pros and cons are what I've been able to come up with, though I have very little experience with interpreted languages in general. I'd love to hear any thoughts on both the interpreters I've proposed here, if these pros/cons listed are legitimate, and any other interpreters I haven't thought of (e.g.- would php be appropriate for something like this? lua?). First hand experience with embedding an interpreter in your code is definitely a plus!
0
1
453
0
3,168,060
0
0
0
0
2
false
6
2010-07-02T16:58:00.000
3
3
1
Picking a front-end/interpreter for a scientific code
3,167,661
0.197375
c++,python,matlab,tcl,interpreter
Have you considered using Octave? From what I gather, it is nearly a drop-in replacement for much of matlab. This might allow you to support matlab for those who have it, and a free alternative for those who don't. Since the "meat" of your program appears to be written in another language, the performance considerations seem to be not as important as providing an environment that has: plotting and visualization capabilities, is cross-platform, has a big user base, and in a language that nearly everyone in academia and/or involved with modelling fluid flow probably already knows. Matlab/Octave can potentially have all of those.
The simulation tool I have developed over the past couple of years, is written in C++ and currently has a tcl interpreted front-end. It was written such that it can be run either in an interactive shell, or by passing an input file. Either way, the input file is written in tcl (with many additional simulation-specific commands I have added). This allows for quite powerful input files (e.g.- when running monte-carlo sims, random distributions can be programmed as tcl procedures directly in the input file). Unfortunately, I am finding that the tcl interpreter is becoming somewhat limited compared to what more modern interpreted languages have to offer, and its syntax seems a bit arcane. Since the computational engine was written as a library with a c-compatible API, it should be straightforward to write alternative front-ends, and I am thinking of moving to a new interpreter, however I am having a bit of a time choosing (mostly because I don't have significant experience with many interpreted languages). The options I have begun to explore are as follows: Remaining with tcl: Pros: - No need to change the existing code. - Existing input files stay the same. (though I'd probably keep the tcl front end as an option) - Mature language with lots of community support. Cons: - Feeling limited by the language syntax. - Getting complaints from users as to the difficulty of learning tcl. Python: Pros: - Modern interpreter, known to be quite efficient. - Large, active community. - Well known scientific and mathematical modules, such as scipy. - Commonly used in the academic Scientific/engineering community (typical users of my code) Cons: - I've never used it and thus would take time to learn the language (this is also a pro, as I've been meaning to learn python for quite some time) - Strict formatting of the input files (indentation, etc..) Matlab: Pros: - Very power and widely used mathematical tool - Powerful built-in visualization/plotting. 
- Extensible, through community submitted code, as well as commercial toolboxes. - Many in science/engineering academia are familiar with and comfortable with matlab. Cons: - Cannot distribute as an executable- would need to be an add-on/toolbox. - Would require (?) the matlab compiler (which is pricy). - Requires Matlab, which is also pricy. These pros and cons are what I've been able to come up with, though I have very little experience with interpreted languages in general. I'd love to hear any thoughts on both the interpreters I've proposed here, if these pros/cons listed are legitimate, and any other interpreters I haven't thought of (e.g.- would php be appropriate for something like this? lua?). First hand experience with embedding an interpreter in your code is definitely a plus!
0
1
453
0
3,177,037
0
0
0
0
3
false
14
2010-07-05T02:25:00.000
1
5
0
What is a good first-implementation for learning machine learning?
3,176,967
0.039979
python,computer-science,artificial-intelligence,machine-learning
Decision tree. It is frequently used in classification tasks and has a lot of variants. Tom Mitchell's book is a good reference to implement it.
I find learning new topics comes best with an easy implementation to code to get the idea. This is how I learned genetic algorithms and genetic programming. What would be some good introductory programs to write to get started with machine learning? Preferably, let any referenced resources be accessible online so the community can benefit
0
1
5,717
0
3,182,779
0
0
0
0
3
false
14
2010-07-05T02:25:00.000
1
5
0
What is a good first-implementation for learning machine learning?
3,176,967
0.039979
python,computer-science,artificial-intelligence,machine-learning
Neural nets may be the easiest thing to implement first, and they're fairly thoroughly covered throughout literature.
I find learning new topics comes best with an easy implementation to code to get the idea. This is how I learned genetic algorithms and genetic programming. What would be some good introductory programs to write to get started with machine learning? Preferably, let any referenced resources be accessible online so the community can benefit
0
1
5,717
0
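The two answers above suggest a decision tree and a neural net as first implementations. As a hedged illustration of the "simplest possible neural net" route, here is a minimal single-neuron perceptron learning the AND function; the data, learning rate, and epoch count are invented for the example, not taken from the thread:

```python
# Minimal perceptron: one neuron with a step activation, trained on AND.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):  # a handful of epochs suffices for a separable problem
    for (x1, x2), target in zip(X, y):
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred
        # perceptron update rule: nudge weights toward the correct label
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in X]
```

AND is linearly separable, so the perceptron convergence theorem guarantees this loop finds a separating line; XOR would be the classic follow-up exercise showing where a single neuron fails.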
3,177,077
0
0
0
0
3
false
14
2010-07-05T02:25:00.000
-8
5
0
What is a good first-implementation for learning machine learning?
3,176,967
-1
python,computer-science,artificial-intelligence,machine-learning
There is something called books; are you familiar with those? When I was exploring AI two decades ago, there were many books. I guess now that the internet exists, books are archaic, but you can probably find some in an ancient library.
I find learning new topics comes best with an easy implementation to code to get the idea. This is how I learned genetic algorithms and genetic programming. What would be some good introductory programs to write to get started with machine learning? Preferably, let any referenced resources be accessible online so the community can benefit
0
1
5,717
0
3,207,845
0
0
0
0
1
true
5
2010-07-07T10:07:00.000
2
2
0
AdaBoost ML algorithm python implementation
3,193,756
1.2
python,machine-learning,adaboost
Thanks a million Steve! In fact, your suggestion had some compatibility issues with MacOSX (a particular library was incompatible with the system) BUT it helped me find out a more interesting package : icsi.boost.macosx. I am just denoting that in case any Mac-eter finds it interesting! Thank you again! Tim
Is there anyone that has some ideas on how to implement the AdaBoost (Boostexter) algorithm in python? Cheers!
0
1
4,289
0
3,227,905
0
0
0
0
2
false
25
2010-07-12T10:57:00.000
2
5
0
Determine height of Coffee in the pot using Python imaging
3,227,843
0.07983
python,image-processing
First do thresholding, then segmentation. Then you can more easily detect edges.
We have a web-cam in our office kitchenette focused at our coffee maker. The coffee pot is clearly visible. Both the location of the coffee pot and the camera are static. Is it possible to calculate the height of coffee in the pot using image recognition? I've seen image recognition used for quite complex stuff like face-recognition. As compared to those projects, this seems to be a trivial task of measuring the height. (That's my best guess and I have no idea of the underlying complexities.) How would I go about this? Would this be considered a very complex job to partake? FYI, I've never done any kind of imaging-related work.
0
1
1,297
0
3,231,205
0
0
0
0
2
false
25
2010-07-12T10:57:00.000
0
5
0
Determine height of Coffee in the pot using Python imaging
3,227,843
0
python,image-processing
Take pictures of the pot with different levels of coffee in it and downsample each image to maybe 4x10 pixels; these become your reference images. Do the same in a loop for each new live picture, calculate the difference of each pixel's value compared to the reference images, and take the reference image with the least difference sum; that gives you the state of your coffee machine. You might experiment with whether a grayscale version, or only the red or green channel, gives better results. If different light settings cause problems, this approach is useless; either buy a spotlight for the coffee machine, or lighten or darken each picture until the sum of all pixels reaches a reference value.
We have a web-cam in our office kitchenette focused at our coffee maker. The coffee pot is clearly visible. Both the location of the coffee pot and the camera are static. Is it possible to calculate the height of coffee in the pot using image recognition? I've seen image recognition used for quite complex stuff like face-recognition. As compared to those projects, this seems to be a trivial task of measuring the height. (That's my best guess and I have no idea of the underlying complexities.) How would I go about this? Would this be considered a very complex job to partake? FYI, I've never done any kind of imaging-related work.
0
1
1,297
0
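The second answer's downsample-and-compare idea is simple enough to sketch directly. A hedged toy version, with 4x10 synthetic frames standing in for real downsampled webcam captures (the function name is invented):

```python
import numpy as np

def closest_reference(frame, references):
    """Index of the reference image with the smallest sum of absolute
    per-pixel differences, i.e. the comparison loop described above."""
    diffs = [np.abs(frame.astype(int) - ref.astype(int)).sum()
             for ref in references]
    return int(np.argmin(diffs))

# toy 4x10 "downsampled" frames: an empty pot and a full pot
empty = np.zeros((4, 10), dtype=np.uint8)
full = np.full((4, 10), 200, dtype=np.uint8)

frame = np.full((4, 10), 180, dtype=np.uint8)   # live capture, nearly full
state = closest_reference(frame, [empty, full])
```

Casting to int before subtracting matters: uint8 arithmetic wraps around, which would silently corrupt the difference sums.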
3,242,538
0
0
0
0
1
false
27
2010-07-13T23:37:00.000
1
6
0
Interpolation over an irregular grid
3,242,382
0.033321
python,numpy,scipy,interpolation
There's a bunch of options here; which one is best will depend on your data... However, I don't know of an out-of-the-box solution for you. You say your input data is from tripolar data. There are three main cases for how this data could be structured: 1. Sampled from a 3d grid in tripolar space, projected back to 2d LAT, LON data. 2. Sampled from a 2d grid in tripolar space, projected into 2d LAT LON data. 3. Unstructured data in tripolar space projected into 2d LAT LON data. The easiest of these is 2. Instead of interpolating in LAT LON space, "just" transform your point back into the source space and interpolate there. Another option that works for 1 and 2 is to search for the cells that map from tripolar space to cover your sample point. (You can use a BSP or grid type structure to speed up this search.) Pick one of the cells, and interpolate inside it. Finally there's a heap of unstructured interpolation options... but they tend to be slow. A personal favourite of mine is to use a linear interpolation of the nearest N points; finding those N points can again be done with gridding or a BSP. Another good option is to Delaunay triangulate the unstructured points and interpolate on the resulting triangular mesh. Personally, if my mesh was case 1, I'd use an unstructured strategy, as I'd be worried about having to handle searching through cells with overlapping projections. Choosing the "right" cell would be difficult.
So, I have three numpy arrays which store latitude, longitude, and some property value on a grid -- that is, I have LAT(y,x), LON(y,x), and, say temperature T(y,x), for some limits of x and y. The grid isn't necessarily regular -- in fact, it's tripolar. I then want to interpolate these property (temperature) values onto a bunch of different lat/lon points (stored as lat1(t), lon1(t), for about 10,000 t...) which do not fall on the actual grid points. I've tried matplotlib.mlab.griddata, but that takes far too long (it's not really designed for what I'm doing, after all). I've also tried scipy.interpolate.interp2d, but I get a MemoryError (my grids are about 400x400). Is there any sort of slick, preferably fast way of doing this? I can't help but think the answer is something obvious... Thanks!!
0
1
28,537
0
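The "nearest N points" strategy from the answer, at its simplest (N=1), can be sketched with brute force; the toy irregular grid below is invented for illustration, and a KD-tree or BSP would replace the linear scan on a real 400x400 grid:

```python
import numpy as np

# Toy irregular grid: LAT/LON each depend on both indices, so the grid
# is not rectangular in lat/lon space (a stand-in for tripolar data).
y, x = np.mgrid[0:5, 0:5].astype(float)
LAT = y + 0.1 * x
LON = x + 0.1 * y
T = LAT + LON                     # any field defined on the grid

pts = np.column_stack([LON.ravel(), LAT.ravel()])
vals = T.ravel()

def interp_nearest(lon, lat):
    """Value at the grid point nearest to (lon, lat); brute-force scan."""
    d2 = (pts[:, 0] - lon) ** 2 + (pts[:, 1] - lat) ** 2
    return vals[np.argmin(d2)]

t = interp_nearest(2.05, 3.1)     # snaps to grid point x=2, y=3
```

Nearest-neighbour gives a blocky result; averaging the nearest N values, or Delaunay triangulation as the answer suggests, smooths it out.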
32,098,823
0
0
0
0
1
false
184
2010-07-26T17:25:00.000
2
10
0
Numpy matrix to array
3,337,301
0.039979
python,arrays,matrix,numpy
First, Mv = numpy.asarray(M.T), which gives you a 1x4, still-2D array. Then, perform A = Mv[0,:], which gives you what you want. You could put them together, as numpy.asarray(M.T)[0,:].
I am using numpy. I have a matrix with 1 column and N rows and I want to get an array with N elements. For example, if I have M = matrix([[1], [2], [3], [4]]), I want to get A = array([1,2,3,4]). To achieve it, I use A = np.array(M.T)[0]. Does anyone know a more elegant way to get the same result? Thanks!
0
1
319,127
0
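The variants in this exchange can be compared side by side; the `.A1` and `ravel()` forms are additions of mine, not from the thread:

```python
import numpy as np

M = np.matrix([[1], [2], [3], [4]])

A1 = np.asarray(M.T)[0, :]   # the answer's suggestion
A2 = np.array(M.T)[0]        # the question's original one-liner
A3 = M.A1                    # matrix attribute that flattens to 1-D
A4 = np.asarray(M).ravel()   # works on matrices and plain arrays alike
```

In modern NumPy, `np.matrix` itself is discouraged; starting from a plain 2-D ndarray, `ravel()` (or `flatten()` for a copy) is the idiomatic route.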
3,352,172
0
0
0
0
1
false
6
2010-07-28T10:28:00.000
1
3
0
Good framework for live charting in Python?
3,351,963
0.066568
python,live,charts
I haven't worked with Matplotlib, but I've always found gnuplot to be adequate for all my charting needs. You have the option of calling gnuplot from Python, or using gnuplot.py (gnuplot-py.sourceforge.net) to interface to gnuplot.
I am working on a Python application that involves running regression analysis on live data, and charting both. That is, the application gets fed with live data, and the regression models re-calculates as the data updates. Please note that I want to plot both the input (the data) and output (the regression analysis) in the same one chart. I have previously done some work with Matplotlib. Is that the best framework for this? It seems to be fairly static, I can't find any good examples similar to mine above. It also seems pretty bloated to me. Performance is key, so if there is any fast python charting framework out there with a small footprint, I'm all ears...
0
1
5,782
0
3,355,927
0
0
1
0
2
false
4
2010-07-28T17:52:00.000
2
6
0
Suggestions for passing large table between Python and C#
3,355,832
0.066568
c#,python,file
You may consider running IronPython - then you can pass values back and forth across C#/Python
I have a C# application that needs to be run several thousand times. Currently it precomputes a large table of constant values at the start of the run for reference. As these values will be the same from run to run I would like to compute them independently in a simple python script and then just have the C# app import the file at the start of each run. The table consists of a sorted 2D array (500-3000+ rows/columns) of simple (int x, double y) tuples. I am looking for recommendations concerning the best/simplest way to store and then import this data. For example, I could store the data in a text file like this "(x1,y1)|(x2,y2)|(x3,y3)|...|(xn,yn)" This seems like a very ugly solution to a problem that seems to lend itself to a specific data structure or library I am currently unaware of. Any suggestions would be welcome.
0
1
1,046
0
3,356,036
0
0
1
0
2
false
4
2010-07-28T17:52:00.000
1
6
0
Suggestions for passing large table between Python and C#
3,355,832
0.033321
c#,python,file
CSV is a fine suggestion, but it may be clumsy with the values being int and double. Generally, tab or semicolon are the best separators.
I have a C# application that needs to be run several thousand times. Currently it precomputes a large table of constant values at the start of the run for reference. As these values will be the same from run to run I would like to compute them independently in a simple python script and then just have the C# app import the file at the start of each run. The table consists of a sorted 2D array (500-3000+ rows/columns) of simple (int x, double y) tuples. I am looking for recommendations concerning the best/simplest way to store and then import this data. For example, I could store the data in a text file like this "(x1,y1)|(x2,y2)|(x3,y3)|...|(xn,yn)" This seems like a very ugly solution to a problem that seems to lend itself to a specific data structure or library I am currently unaware of. Any suggestions would be welcome.
0
1
1,046
0
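The tab-separated format suggested above round-trips cleanly through Python's stdlib csv module; the in-memory buffer below stands in for the real file the C# side would read:

```python
import csv
import io

# (int, double) rows written as tab-separated values
rows = [(1, 0.5), (2, 1.25), (3, -7.0)]

buf = io.StringIO()                  # stand-in for open("table.tsv", "w")
csv.writer(buf, delimiter="\t").writerows(rows)
text = buf.getvalue()

# The C# side would split each line on '\t'; here we round-trip in
# Python to confirm the values survive the text encoding.
back = [(int(a), float(b))
        for a, b in csv.reader(io.StringIO(text), delimiter="\t")]
```

For a 3000x3000 table where load time matters, a binary format (e.g. fixed-width int/double pairs read with C#'s BinaryReader) avoids the parse cost, at the price of readability.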
3,357,139
0
0
1
0
1
false
2
2010-07-28T18:06:00.000
0
2
0
What is the best way to load a CCITT T.3 compressed tiff using python?
3,355,962
0
python,compression,tiff,imaging,image-formats
How about running tiffcp with subprocess to convert to LZW (-c lzw switch), then process normally with pylibtiff? There are Windows builds of tiffcp lying around on the web. Not exactly Python-native solution, but still...
I am trying to load a CCITT T.3 compressed tiff into python, and get the pixel matrix from it. It should just be a logical matrix. I have tried using pylibtiff and PIL, but when I load it with them, the matrix it returns is empty. I have read in a lot of places that these two tools support loading CCITT but not accessing the pixels. I am open to converting the image, as long as I can get the logical matrix from it and do it in python code. The crazy thing is is that if I open one of my images in paint, save it without altering it, then try to load it with pylibtiff, it works. Paint re-compresses it to the LZW compression. So I guess my real question is: Is there a way to either natively load CCITT images to matricies or convert the images to LZW using python?? Thanks, tylerthemiler
0
1
1,297
0
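The tiffcp route from the answer can be wrapped in a small helper; this is a sketch that assumes tiffcp (from the libtiff tools) is on PATH, and the function names are invented:

```python
import shutil
import subprocess

def lzw_convert_cmd(src, dst):
    """tiffcp invocation that re-compresses a CCITT TIFF as LZW (-c lzw),
    after which PIL/pylibtiff can read the pixel data."""
    return ["tiffcp", "-c", "lzw", src, dst]

def convert(src, dst):
    if shutil.which("tiffcp") is None:       # libtiff tools not installed
        raise FileNotFoundError("tiffcp not found on PATH")
    subprocess.run(lzw_convert_cmd(src, dst), check=True)
```

This keeps the conversion inside the Python program (via subprocess) even though it is not Python-native, matching the answer's caveat.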
3,368,847
0
0
0
0
1
true
7
2010-07-30T04:43:00.000
6
2
0
Zooming With Python Image Library
3,368,740
1.2
python,image-processing,python-imaging-library
You would be much better off using the EXTENT rather than the AFFINE method. You only need to calculate two things: what part of the input you want to see, and how large it should be. For example, if you want to see the whole image scaled down to half size (i.e. zooming out by 2), you'd pass the data (0, 0, im.size[0], im.size[1]) and the size (im.size[0]/2, im.size[1]/2).
I'm writing a simple application in Python which displays images.I need to implement Zoom In and Zoom Out by scaling the image. I think the Image.transform method will be able to do this, but I'm not sure how to use it, since it's asking for an affine matrix or something like that :P Here's the quote from the docs: im.transform(size, AFFINE, data, filter) => image Applies an affine transform to the image, and places the result in a new image with the given size. Data is a 6-tuple (a, b, c, d, e, f) which contain the first two rows from an affine transform matrix. For each pixel (x, y) in the output image, the new value is taken from a position (a x + b y + c, d x + e y + f) in the input image, rounded to nearest pixel. This function can be used to scale, translate, rotate, and shear the original image.
0
1
7,941
0
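The EXTENT call takes an output size and an input box (x0, y0, x1, y1); the two tuples for the answer's zoom-out example, plus the symmetric zoom-in case, can be computed without touching PIL. Helper names and the toy 800x600 size are mine:

```python
# Parameters for im.transform(size, Image.EXTENT, box) in PIL:
# `box` selects the input region, `size` is the output image size.

def zoom_out_2x(w, h):
    # whole image, rendered at half size (the answer's example)
    return (w // 2, h // 2), (0, 0, w, h)

def zoom_in_2x(w, h):
    # centre quarter of the image, rendered at full size
    return (w, h), (w // 4, h // 4, 3 * w // 4, 3 * h // 4)

size_out, box_out = zoom_out_2x(800, 600)
size_in, box_in = zoom_in_2x(800, 600)
# with PIL: zoomed = im.transform(size_out, Image.EXTENT, box_out)
```

Compared with AFFINE's six-tuple, EXTENT keeps the zoom logic in image coordinates, which is why the answer recommends it.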
3,374,079
0
0
0
0
1
false
27
2010-07-30T14:30:00.000
0
5
0
Profile Memory Allocation in Python (with support for Numpy arrays)
3,372,444
0
python,numpy,memory-management,profile
Can you just save/pickle some of the arrays to disk in tmp files when not using them? That's what I've had to do in the past with large arrays. Of course this will slow the program down, but at least it'll finish. Unless you need them all at once?
I have a program that contains a large number of objects, many of them Numpy arrays. My program is swapping miserably, and I'm trying to reduce the memory usage, because it actually can't finish on my system with the current memory requirements. I am looking for a nice profiler that would allow me to check the amount of memory consumed by various objects (I'm envisioning a memory counterpart to cProfile) so that I know where to optimize. I've heard decent things about Heapy, but Heapy unfortunately does not support Numpy arrays, and most of my program involves Numpy arrays.
0
1
4,821
0
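The save-to-disk workaround in the answer is a few lines with NumPy's own format, which preserves dtype and shape (unlike ad-hoc pickling); the temp path below is illustrative:

```python
import os
import tempfile
import numpy as np

# Offload an array to disk while it is not needed, then reload on demand.
big = np.arange(1_000_000, dtype=np.float64)

path = os.path.join(tempfile.mkdtemp(), "big.npy")
np.save(path, big)          # .npy records dtype and shape in the header
del big                     # release the memory

big = np.load(path)         # bring it back when needed
```

`np.load(path, mmap_mode="r")` goes further: it memory-maps the file so only the pages actually touched are read in, which directly attacks the swapping problem described in the question.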
20,171,517
0
0
0
0
1
false
104
2010-08-05T10:39:00.000
4
7
0
How to create an empty R vector to add new items
3,413,879
0.113791
python,r,vector,rpy2
As pointed out by Brani, vector() is a solution, e.g. newVector <- vector(mode = "numeric", length = 50) will return a vector named "newVector" with 50 "0"'s as initial values. It is also fairly common to just add the new scalar to an existing vector to arrive at an expanded vector, e.g. aVector <- c(aVector, newScalar)
I want to use R in Python, as provided by the module Rpy2. I notice that R has very convenient [] operations by which you can extract the specific columns or lines. How could I achieve such a function by Python scripts? My idea is to create an R vector and add those wanted elements into this vector so that the final vector is the same as that in R. I created a seq(), but it seems that it has an initial digit 1, so the final result would always start with the digit 1, which is not what I want. So, is there a better way to do this?
0
1
291,651
0
4,092,003
0
0
0
0
1
false
12
2010-08-10T11:23:00.000
-5
4
0
PDF image in PDF document using ReportLab (Python)
3,448,365
-1
python,image,pdf,reportlab
Use from reportlab.graphics import renderPDF
I saved some plots from matplotlib into a pdf format because it seems to offer a better quality. How do I include the PDF image into a PDF document using ReportLab? The convenience method Image(filepath) does not work for this format. Thank you.
0
1
7,751
0
3,494,982
0
1
0
0
2
true
31
2010-08-16T12:29:00.000
33
6
0
Convert image to a matrix in python
3,493,092
1.2
python,image-processing,numpy,python-imaging-library
scipy.misc.imread() will return a Numpy array, which is handy for lots of things.
I want to do some image processing using Python. Is there a simple way to import .png image as a matrix of greyscale/RGB values (possibly using PIL)?
0
1
77,563
0
50,263,426
0
1
0
0
2
false
31
2010-08-16T12:29:00.000
7
6
0
Convert image to a matrix in python
3,493,092
1
python,image-processing,numpy,python-imaging-library
scipy.misc.imread() is deprecated now. We can use imageio.imread instead to read the image as a NumPy array.
I want to do some image processing using Python. Is there a simple way to import .png image as a matrix of greyscale/RGB values (possibly using PIL)?
0
1
77,563
0
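Both answers name a one-call reader; since the question explicitly mentions PIL, here is a hedged Pillow-plus-NumPy equivalent. The in-memory PNG is a stand-in for a real file on disk:

```python
import io
import numpy as np
from PIL import Image

# Build a tiny 4x3 solid-red PNG in memory (stand-in for a .png file).
buf = io.BytesIO()
Image.new("RGB", (4, 3), color=(255, 0, 0)).save(buf, format="PNG")

buf.seek(0)
rgb = np.asarray(Image.open(buf))                  # (rows, cols, 3) uint8

buf.seek(0)
grey = np.asarray(Image.open(buf).convert("L"))    # (rows, cols) greyscale
```

Note the shape convention: rows first, so a WxH image becomes an (H, W, channels) array.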
3,520,715
0
0
0
0
2
false
1
2010-08-19T10:07:00.000
1
2
0
Can scipy calculate (double) integrals with complex-valued integrands (real and imaginary parts in integrand)?
3,520,672
0.099668
python,scipy
Yes. Those integrals (I'll assume they're area integrals over a region in 2D space) can be calculated using an appropriate quadrature rule. You can also use Green's theorem to convert them into contour integrals and use Gaussian quadrature to integrate along the path.
(Couldn't upload the picture showing the integral as I'm a new user.)
0
1
571
0
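Since scipy's real-valued quadrature routines (e.g. scipy.integrate.quad) do not accept complex integrands directly, the standard trick is to integrate the real and imaginary parts separately and recombine. Sketched here with a plain trapezoid rule so it needs only numpy; the same split applies verbatim to two quad calls:

```python
import numpy as np

# Integrate exp(i*x) over [0, pi] by splitting into real and imaginary
# parts; the exact value is (e^{i*pi} - 1)/i = 2i.
x = np.linspace(0.0, np.pi, 10_001)
f = np.exp(1j * x)

dx = np.diff(x)
re = ((f.real[:-1] + f.real[1:]) / 2 * dx).sum()   # trapezoid rule, real part
im = ((f.imag[:-1] + f.imag[1:]) / 2 * dx).sum()   # trapezoid rule, imag part
result = re + 1j * im
```

For the double integrals in the question, the same decomposition works with scipy.integrate.dblquad, one call per part.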
3,526,740
0
0
0
0
2
false
1
2010-08-19T10:07:00.000
0
2
0
Can scipy calculate (double) integrals with complex-valued integrands (real and imaginary parts in integrand)?
3,520,672
0
python,scipy
Thanks duffymo! I am calculating Huygens-Fresnel diffraction integrals: plane and other wave diffraction through circular (2D) apertures in polar coordinates. As far as the programming goes: Currently a lot of my code is in Mathematica. I am considering changing to one of: scipy, java + flanagan math library, java + apache commons math library, gnu scientific library, or octave. My first candidate for evaluation is scipy, but if it cannot handle complex-valued integrands, then I have to change my plans for the weekend...
(Couldn't upload the picture showing the integral as I'm a new user.)
0
1
571
0
3,605,012
0
0
0
0
2
true
0
2010-08-26T15:00:00.000
1
2
0
Python/Numpy error: NULL result without error in PyObject_Call
3,576,430
1.2
python,numpy
It appears that this may have been an error from using the 32-bit version of NumPy and not the 64 bit. For whatever reason, though the program has no problem keeping the array in memory, it trips up when writing the array to a file if the number of elements in the array is greater than 2^32.
I've never seen this error before, and none of the hits on Google seem to apply. I've got a very large NumPy array that holds Boolean values. When I try writing the array using numpy.dump(), I get the following error: SystemError: NULL result without error in PyObject_Call The array is initialized with all False values, and the only time I ever access it is to set some of the values to True, so I have no idea why any of the values would be null. When I try running the same program with a smaller array, I get no error. However, since the error occurs at the writing step, I don't think that it's a memory issue. Has anybody else seen this error before?
0
1
3,309
0
3,576,712
0
0
0
0
2
false
0
2010-08-26T15:00:00.000
1
2
0
Python/Numpy error: NULL result without error in PyObject_Call
3,576,430
0.099668
python,numpy
That message comes directly from the CPython interpreter (see abstract.c, method PyObject_Call). You may get a better response on a Python or NumPy mailing list regarding that error message because it looks like a problem in C code. Write a simple example demonstrating the problem and you should be able to narrow the issue down to a module, then a method.
I've never seen this error before, and none of the hits on Google seem to apply. I've got a very large NumPy array that holds Boolean values. When I try writing the array using numpy.dump(), I get the following error: SystemError: NULL result without error in PyObject_Call The array is initialized with all False values, and the only time I ever access it is to set some of the values to True, so I have no idea why any of the values would be null. When I try running the same program with a smaller array, I get no error. However, since the error occurs at the writing step, I don't think that it's a memory issue. Has anybody else seen this error before?
0
1
3,309
0
21,390,123
0
0
0
0
3
false
6
2010-09-03T21:18:00.000
3
7
0
Importing SPSS dataset into Python
3,639,639
0.085505
python,import,dataset,spss
Option 1: As rkbarney pointed out, there is the Python savReaderWriter available via pypi. I've run into two issues: It relies on a lot of extra libraries beyond the seemingly pure-python implementation. SPSS files are read and written in nearly every case by the IBM-provided SPSS I/O modules. These modules differ by platform and in my experience "pip install savReaderWriter" doesn't get them running out of the box (on OS X). Development on savReaderWriter is, while not dead, less up-to-date than one might hope. This complicates the first issue. It relies on some deprecated packages to increase speed and gives some warnings any time you import savReaderWriter if they're not available. Not a huge issue today but it could be trouble in the future as IBM continues to update the SPSS I/O modules to deal with new SPSS formats (they're on version 21 or 22 already if memory serves). Option 2: I've chosen to use R as a middle-man. Using rpy2, I set up a simple function to read the file into an R data frame and output it again as a CSV file which I subsequently import into python. It's a bit Rube Goldberg but it works. Of course, this requires R, which may also be a hassle to install in your environment (and has different binaries for different platforms).
Is there any way to import SPSS dataset into Python, preferably NumPy recarray format? I have looked around but could not find any answer. Joon
0
1
9,605
0
3,691,267
0
0
0
0
3
false
6
2010-09-03T21:18:00.000
1
7
0
Importing SPSS dataset into Python
3,639,639
0.028564
python,import,dataset,spss
To be clear, the SPSS ODBC driver does not require an SPSS installation.
Is there any way to import SPSS dataset into Python, preferably NumPy recarray format? I have looked around but could not find any answer. Joon
0
1
9,605
0
3,640,019
0
0
0
0
3
false
6
2010-09-03T21:18:00.000
3
7
0
Importing SPSS dataset into Python
3,639,639
0.085505
python,import,dataset,spss
SPSS has an extensive integration with Python, but that is meant to be used with SPSS (now known as IBM SPSS Statistics). There is an SPSS ODBC driver that could be used with Python ODBC support to read a sav file.
Is there any way to import SPSS dataset into Python, preferably NumPy recarray format? I have looked around but could not find any answer. Joon
0
1
9,605
0
3,650,761
0
0
1
0
1
false
67
2010-09-06T09:04:00.000
25
3
0
Are NumPy's math functions faster than Python's?
3,650,194
1
python,performance,numpy
You should use numpy functions to deal with numpy types, and regular Python functions to deal with regular Python types. The worst performance usually occurs when mixing Python builtins with numpy, because of type conversions. Those type conversions have been optimized lately, but it's still often better not to use them. Of course, your mileage may vary, so use profiling tools to figure it out. Also consider using tools like Cython, or writing a C module, if you want to optimize your program further, or consider not using Python when performance matters. But once your data has been put into a numpy array, numpy can be really fast at computing a bunch of data.
I have a function defined by a combination of basic math functions (abs, cosh, sinh, exp, ...). I was wondering if it makes a difference (in speed) to use, for example, numpy.abs() instead of abs()?
0
1
39,385
0
3,686,359
0
0
0
0
1
false
4
2010-09-09T21:49:00.000
1
1
0
Compensate for Auto White Balance with OpenCV
3,680,829
0.197375
python,opencv,webcam,touchscreen,background-subtraction
You could try interfacing with your camera through DirectShow and turning off auto white balance in your code, or you could first try the software deployed with the camera. It often gives you the ability to adjust certain settings, such as white balance and similar parameters.
I'm working on an app that takes in webcam data, applies various transformations, blurs and then does a background subtraction and threshold filter. It's a type of optical touch screen retrofitting system (the design is so different that tbeta/touchlib can't be used). The camera's white balance is screwing up the threshold filter by brightening everything whenever a user's hand is seen and darkening when it leaves, causing one of those to exhibit immense quantities of static. Is there a good way to counteract it? Is taking a corner, assuming it's constant and adjusting the rest of the image's brightness so that it stays constant a good idea?
0
1
4,888
0
3,688,556
0
1
0
0
2
false
5
2010-09-10T19:29:00.000
0
6
0
Storing an inverted index
3,687,715
0
python,information-retrieval,inverted-index
You could store the repr() of the dictionary and use that to re-create it.
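A minimal sketch of that approach; ast.literal_eval is used here as a safer stand-in for eval() when the stored data consists only of plain literals (dicts, lists, strings, numbers):

```python
import ast
import tempfile

index = {"cat": {"doc1": [3, 17], "doc2": [5]}, "dog": {"doc1": [9]}}

# Persist the dictionary as its textual representation.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(repr(index))
    path = f.name

# Re-create it later from the stored text.
with open(path) as f:
    restored = ast.literal_eval(f.read())

assert restored == index
```

Note that this still loads the whole index into memory at once, so it shares the drawback of pickle mentioned in the question.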
I am working on a project on Info Retrieval. I have made a Full Inverted Index using Hadoop/Python. Hadoop outputs the index as (word,documentlist) pairs which are written on the file. For a quick access, I have created a dictionary(hashtable) using the above file. My question is, how do I store such an index on disk that also has quick access time. At present I am storing the dictionary using python pickle module and loading from it but it brings the whole of index into memory at once (or does it?). Please suggest an efficient way of storing and searching through the index. My dictionary structure is as follows (using nested dictionaries) {word : {doc1:[locations], doc2:[locations], ....}} so that I can get the documents containing a word by dictionary[word].keys() ... and so on.
0
1
3,666
0
5,341,353
0
1
0
0
2
false
5
2010-09-10T19:29:00.000
0
6
0
Storing an inverted index
3,687,715
0
python,information-retrieval,inverted-index
I am using anydbm for that purpose. Anydbm provides the same dictionary-like interface, except it allows only strings as keys and values. But this is not a constraint, since you can use cPickle's loads/dumps to store more complex structures in the index.
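In Python 3, anydbm became the dbm module and cPickle became pickle; a sketch of the same idea:

```python
import dbm
import os
import pickle
import tempfile

path = os.path.join(tempfile.mkdtemp(), "inverted_index")

# Keys and values must be strings/bytes, so pickle serializes the
# nested posting dictionary for each word.
with dbm.open(path, "c") as db:
    db["cat"] = pickle.dumps({"doc1": [3, 17], "doc2": [5]})

# A lookup reads only the requested entry from disk,
# not the whole index.
with dbm.open(path, "r") as db:
    postings = pickle.loads(db["cat"])

assert postings == {"doc1": [3, 17], "doc2": [5]}
```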
I am working on a project on Info Retrieval. I have made a Full Inverted Index using Hadoop/Python. Hadoop outputs the index as (word,documentlist) pairs which are written on the file. For a quick access, I have created a dictionary(hashtable) using the above file. My question is, how do I store such an index on disk that also has quick access time. At present I am storing the dictionary using python pickle module and loading from it but it brings the whole of index into memory at once (or does it?). Please suggest an efficient way of storing and searching through the index. My dictionary structure is as follows (using nested dictionaries) {word : {doc1:[locations], doc2:[locations], ....}} so that I can get the documents containing a word by dictionary[word].keys() ... and so on.
0
1
3,666
0
19,403,571
0
0
0
0
1
false
2
2010-09-11T22:55:00.000
0
4
0
How to find the average of multiple columns in a file using python
3,692,996
0
python
Less of an answer than it is an alternative understanding of the problem: You could think of each line being a vector. In this way, the average done column-by-column is just the average of each of these vectors. All you need in order to do this is A way to read a line into a vector object, A vector addition operation, Scalar multiplication (or division) of vectors. Python comes (I think) with most of this already installed, but this should lead to some easily readable code.
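A plain-Python sketch of that vector view, assuming a whitespace-separated file with one header row (the file name and sample data here are made up for the demo):

```python
def column_averages(path):
    # Each data row is a vector; the column averages are its mean vector.
    with open(path) as f:
        header = f.readline().split()
        sums = [0.0] * len(header)
        n = 0
        for line in f:
            for i, value in enumerate(line.split()):
                sums[i] += float(value)
            n += 1
    return header, [s / n for s in sums]

# Demo on a tiny sample resembling the data in the question.
with open("trials.txt", "w") as f:
    f.write("Trial1 Trial2 Trial3\n1 0 1\n0 0 0\n2 2 2\n1 1 1\n")

header, means = column_averages("trials.txt")
print(header)  # ['Trial1', 'Trial2', 'Trial3']
print(means)   # [1.0, 0.75, 1.0]
```

Because it reads one line at a time, this also works for files far too large for Excel.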
Hi I have a file that consists of too many columns to open in excel. Each column has 10 rows of numerical values 0-2 and has a row saying the title of the column. I would like the output to be the name of the column and the average value of the 10 rows. The file is too large to open in excel 2000 so I have to try using python. Any tips on an easy way to do this. Here is a sample of the first 3 columns: Trial1 Trial2 Trial3 1 0 1 0 0 0 0 2 0 2 2 2 1 1 1 1 0 1 0 0 0 0 2 0 2 2 2 1 1 1 I want python to output as a test file Trial 1 Trial 2 Trial 3 1 2 1 (whatever the averages are)
0
1
5,272
0
3,704,637
0
0
0
0
1
false
24
2010-09-13T21:29:00.000
20
4
0
In Python small floats tending to zero
3,704,570
1
python,floating-point,numerical-stability
Would it be possible to do your work in a logarithmic space? (For example, instead of storing 1e-320, just store -320, and use addition instead of multiplication)
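A small illustration of why the log-space trick works (the probability values are illustrative):

```python
import math

# Five tiny per-feature probabilities multiply down to an exact 0.0
# in double precision...
probs = [1e-80] * 5
product = 1.0
for p in probs:
    product *= p
print(product)  # 0.0, the product underflowed

# ...but their logs simply add, with no underflow, and the argmax over
# classes is unchanged because log is monotonically increasing.
log_score = sum(math.log(p) for p in probs)
print(log_score)  # about -921.0
```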
I have a Bayesian Classifier programmed in Python, the problem is that when I multiply the features probabilities I get VERY small float values like 2.5e-320 or something like that, and suddenly it turns into 0.0. The 0.0 is obviously of no use to me since I must find the "best" class based on which class returns the MAX value (greater value). What would be the best way to deal with this? I thought about finding the exponential portion of the number (-320) and, if it goes too low, multiplying the value by 1e20 or some value like that. But maybe there is a better way?
0
1
17,743
0
3,762,217
0
0
1
0
2
false
4
2010-09-21T15:45:00.000
2
5
0
Python vs. C++ for an application that does sparse linear algebra
3,761,994
0.07983
c++,python,linear-algebra
I don't have directly applicable experience, but the scipy/numpy operations are almost all implemented in C. As long as most of what you need to do is expressed in terms of scipy/numpy functions, then your code shouldn't be much slower than equivalent C/C++.
I'm writing an application where quite a bit of the computational time will be devoted to performing basic linear algebra operations (add, multiply, multiply by vector, multiply by scalar, etc.) on sparse matrices and vectors. Up to this point, we've built a prototype using C++ and the Boost matrix library. I'm considering switching to Python, to ease of coding the application itself, since it seems the Boost library (the easy C++ linear algebra library) isn't particularly fast anyway. This is a research/proof of concept application, so some reduction of run time speed is acceptable (as I assume C++ will almost always outperform Python) so long as coding time is also significantly decreased. Basically, I'm looking for general advice from people who have used these libraries before. But specifically: 1) I've found scipy.sparse and and pySparse. Are these (or other libraries) recommended? 2) What libraries beyond Boost are recommended for C++? I've seen a variety of libraries with C interfaces, but again I'm looking to do something with low complexity, if I can get relatively good performance. 3) Ultimately, will Python be somewhat comparable to C++ in terms of run time speed for the linear algebra operations? I will need to do many, many linear algebra operations and if the slowdown is significant then I probably shouldn't even try to make this switch. Thank you in advance for any help and previous experience you can relate.
0
1
3,064
0
3,762,759
0
0
1
0
2
false
4
2010-09-21T15:45:00.000
1
5
0
Python vs. C++ for an application that does sparse linear algebra
3,761,994
0.039979
c++,python,linear-algebra
Speed is no longer really an issue for Python nowadays, since ctypes and Cython emerged. What's brilliant about Cython is that you write Python code and it generates C code, without requiring you to know a single line of C, and then compiles it to a library (or you can even create a standalone executable). ctypes is similar, though a bit slower. From the tests I have conducted, Cython code is as fast as C code, which makes sense since Cython code is translated to C code; ctypes is a bit slower. So in the end it's a question of profiling: see what is slow in Python and move it to Cython, or wrap your existing C libraries for Python with Cython. It's quite easy to achieve C speeds this way. So I would recommend not wasting the effort you invested in creating these C libraries; wrap them with Cython and do the rest in Python. Or you could do all of it in Cython if you wish, as Cython is Python bar some limitations, and it even allows you to mix in C code as well. So you could do part of it in C and part in Python/Cython, depending on what makes you feel more comfortable. NumPy and SciPy could be used as well, saving more time and providing ready-to-use solutions to your problems/needs; you should certainly check them out. SciPy even has weave, a tool that lets you inline C code inside your Python code, just as you can inline assembly inside C code. But I think you would prefer to use Cython. Remember, because Cython is both C and Python at the same time, it lets you use C and Python libraries directly.
I'm writing an application where quite a bit of the computational time will be devoted to performing basic linear algebra operations (add, multiply, multiply by vector, multiply by scalar, etc.) on sparse matrices and vectors. Up to this point, we've built a prototype using C++ and the Boost matrix library. I'm considering switching to Python, to ease of coding the application itself, since it seems the Boost library (the easy C++ linear algebra library) isn't particularly fast anyway. This is a research/proof of concept application, so some reduction of run time speed is acceptable (as I assume C++ will almost always outperform Python) so long as coding time is also significantly decreased. Basically, I'm looking for general advice from people who have used these libraries before. But specifically: 1) I've found scipy.sparse and and pySparse. Are these (or other libraries) recommended? 2) What libraries beyond Boost are recommended for C++? I've seen a variety of libraries with C interfaces, but again I'm looking to do something with low complexity, if I can get relatively good performance. 3) Ultimately, will Python be somewhat comparable to C++ in terms of run time speed for the linear algebra operations? I will need to do many, many linear algebra operations and if the slowdown is significant then I probably shouldn't even try to make this switch. Thank you in advance for any help and previous experience you can relate.
0
1
3,064
0
3,792,494
0
0
1
0
2
false
8
2010-09-25T04:12:00.000
2
5
0
Among MATLAB and Python, which one is good for statistical analysis?
3,792,465
0.07983
python,matlab,statistics,analysis
SciPy, NumPy and Matplotlib.
Which one among the two languages is good for statistical analysis? What are the pros and cons, other than accessibility, for each?
0
1
1,096
0
3,792,582
0
0
1
0
2
false
8
2010-09-25T04:12:00.000
3
5
0
Among MATLAB and Python, which one is good for statistical analysis?
3,792,465
0.119427
python,matlab,statistics,analysis
I would pick Python because it can be as powerful as Matlab but is free. Also, you can distribute your applications freely, with no licensing chains. Matlab is awesome and expensive (it has a great statistical package), and things will go more smoothly with it than with Python in the beginning, but not so in the long run. Now, if you really want the best solution, then check out R, the de facto standard statistical package in the community. There is even a Python interface for it. R is also free software.
Which one among the two languages is good for statistical analysis? What are the pros and cons, other than accessibility, for each?
0
1
1,096
0
3,796,237
0
1
0
0
1
false
20
2010-09-25T17:25:00.000
2
3
0
Ruby generators vs Python generators
3,794,762
0.132549
python,ruby,generator,enumerator
Python's generators are stack based; Ruby's Enumerators are often specialised (at the interpreter level) and not stack based.
I've been researching the similarities/differences between Ruby and Python generators (known as Enumerators in Ruby), and so far as i can tell they're pretty much equivalent. However one difference i've noticed is that Python Generators support a close() method whereas Ruby Generators do not. From the Python docs the close() method is said to do the following: Raises a GeneratorExit at the point where the generator function was paused. If the generator function then raises StopIteration (by exiting normally, or due to already being closed) or GeneratorExit (by not catching the exception), close returns to its caller." Is there a good reason why Ruby Enumerators don't support the close() method? Or is it an accidental omission? I also discovered that Ruby Enumerators support a rewind() method yet Python generators do not...is there a reason for this too? Thanks
0
1
4,732
0
3,819,063
0
0
0
0
1
false
1
2010-09-29T05:10:00.000
0
2
0
using file/db as the buffer for very big numpy array to yield data prevent overflow?
3,818,881
0
python,memory-management,numpy
If you have matrices with lots of zeros, use scipy.sparse.csc_matrix. It is also possible to write such an adapter yourself, for example by subclassing the numpy array class.
In using the numpy.ndarray, I met a memory overflow problem due to the size of data. For example: suppose I have a 100000000 * 100000000 * 100000000 float64 array data source; when I want to read the data and process it in memory with np, it will raise a MemoryError because it uses up all the memory available for storing such a big array. Then maybe using a disk file / database as a buffer to store the array is a solution: when I want to use the data, it will get the necessary data from the file / database; otherwise, it is just a python object taking little memory. Is it possible to write such an adapter? Thanks. Rgs, KC
0
1
337
1
3,855,400
0
0
0
0
1
false
1
2010-10-04T00:29:00.000
0
3
0
How to draw polygons with Point2D in wxPython?
3,852,146
0
python,wxpython
DC's only use integers. Try using Cairo or wx.GraphicsContext.
I have input values of x, y, z coordinates in the following format: [-11.235865 5.866001 -4.604924] [-11.262565 5.414276 -4.842384] [-11.291885 5.418229 -4.849229] [-11.235865 5.866001 -4.604924] I want to draw polygons and succeeded with making a list of wx.point objects. But I need to plot floating point coordinates so I had to change it to point2D objects but DrawPolygon doesn't seem to understand floating points, which returns error message: TypeError: Expected a sequence of length-2 sequences or wxPoints. I can't find anywhere in the API that can draw shapes based on point2D coordinates, could anyone tell me a function name will do the job? Thanks
0
1
1,320
0
3,859,736
0
1
1
0
1
false
20
2010-10-04T13:12:00.000
4
10
0
Fastest way to sort in Python
3,855,537
0.07983
python,arrays,performance,sorting
Radix sort theoretically runs in linear time (sort time grows roughly in direct proportion to array size), but in practice Quicksort is probably more suitable, unless you're sorting absolutely massive arrays. If you want to make quicksort a bit faster, you can use insertion sort when the array size becomes small. It would probably be helpful to understand the concepts of algorithmic complexity and Big-O notation too.
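For the bounded range in the question, the simplest radix-style approach is a counting sort (the single-digit special case of radix sort). A sketch; it trades O(k) memory for the value range k:

```python
def counting_sort(values, max_value=100_000):
    # Tally each value, then rebuild the array in order.
    # O(n + k) time, no comparisons at all.
    counts = [0] * max_value
    for v in values:
        counts[v] += 1
    out = []
    for v, c in enumerate(counts):
        out.extend([v] * c)
    return out

print(counting_sort([42, 7, 99_999, 7, 1]))  # [1, 7, 7, 42, 99999]
```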
What is the fastest way to sort an array of whole integers bigger than 0 and less than 100000 in Python? But not using the built-in functions like sort. I'm looking at the possibility of combining 2 sort functions depending on input size.
0
1
57,933
0
25,062,330
0
1
0
0
1
false
23
2010-10-07T22:19:00.000
3
3
0
Display array as raster image in python
3,886,281
0.197375
python,image,image-processing,numpy
Quick addition: for displaying with matplotlib, if you want the image to appear "raster", i.e., pixelized without smoothing, then you should include the option interpolation='nearest' in the call to imshow.
I've got a numpy array in Python and I'd like to display it on-screen as a raster image. What is the simplest way to do this? It doesn't need to be particularly fancy or have a nice interface, all I need to do is to display the contents of the array as a greyscale raster image. I'm trying to transition some of my IDL code to Python with NumPy and am basically looking for a replacement for the tv and tvscl commands in IDL.
0
1
41,992
0
18,177,268
0
0
0
0
1
false
13
2010-10-08T13:48:00.000
1
6
0
Select cells randomly from NumPy array - without replacement
3,891,180
0.033321
python,random,numpy,shuffle,sampling
People using numpy version 1.7 or later can also use the built-in function numpy.random.choice.
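A sketch for the 2D case, using the modern Generator interface (the legacy numpy.random.choice call works the same way): sample flat positions without replacement, then map them back to 2D indices.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
grid = np.arange(20).reshape(4, 5)  # toy 2D array

# Sample every flat position exactly once, in random order...
flat = rng.choice(grid.size, size=grid.size, replace=False)

# ...then map the flat positions back to (row, col) pairs.
rows, cols = np.unravel_index(flat, grid.shape)

# Each cell of `grid` is visited exactly once via grid[r, c].
visited = {(int(r), int(c)) for r, c in zip(rows, cols)}
assert len(visited) == grid.size
```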
I'm writing some modelling routines in NumPy that need to select cells randomly from a NumPy array and do some processing on them. All cells must be selected without replacement (as in, once a cell has been selected it can't be selected again, but all cells must be selected by the end). I'm transitioning from IDL where I can find a nice way to do this, but I assume that NumPy has a nice way to do this too. What would you suggest? Update: I should have stated that I'm trying to do this on 2D arrays, and therefore get a set of 2D indices back.
0
1
15,426
0
3,893,461
0
0
0
0
1
false
4
2010-10-08T14:40:00.000
0
3
0
Clustering problem
3,891,645
0
python,algorithm,cluster-analysis,classification,nearest-neighbor
If your number of clusters is fixed and you only want to maximize the number of points in these clusters, then I think a greedy solution would be good: find the rectangle that contains the maximum number of points, remove those points, find the next rectangle, and so on. So how do you find the rectangle of maximum area A (in fact each rectangle will have this area) that contains the maximum number of points? A rectangle is not really natural for euclidean distance; before trying to solve this, could you clarify whether you really need rectangles, or just some kind of limit on the cluster size? Would a circle/ellipse work? EDIT: greedy will not work (see comment below) and it really does need to be rectangles...
I've been tasked to find N clusters containing the most points for a certain data set given that the clusters are bounded by a certain size. Currently, I am attempting to do this by plugging in my data into a kd-tree, iterating over the data and finding its nearest neighbor, and then merging the points if the cluster they make does not exceed a limit. I'm not sure this approach will give me a global solution so I'm looking for ways to tweak it. If you can tell me what type of problem this would go under, that'd be great too.
0
1
1,423
0
3,934,387
0
1
0
0
1
false
0
2010-10-14T13:56:00.000
2
2
0
build recent numpy on recent ubuntu
3,933,923
0.197375
python,ubuntu,numpy
One way to try, which isn't guaranteed to work but is worth a shot, is to see if uupdate can successfully update the package. Get a tarball of numpy 1.5. Run "apt-get source numpy", which should fetch and unpack the current source from ubuntu. cd into this source directory and run "uupdate ../numpytarballname". This should update the old source package using the newer tarball. Then you can try building with "apt-get build-dep numpy" and "dpkg-buildpackage -rfakeroot". This will require that you have the build-essential and fakeroot packages installed.
How do I build numpy 1.5 on ubuntu 10.10? The instructions I found seems outdated or not clear. Thanks
0
1
1,563
0
4,572,785
0
0
0
0
2
false
9
2010-10-15T21:16:00.000
2
5
0
Migrating from Stata to Python
3,946,219
0.07983
python,statistics,numpy,stata
Use Rpy2 and call the R var package.
Some coworkers who have been struggling with Stata 11 are asking for my help to try to automate their laborious work. They mainly use 3 commands in Stata: tsset (sets a time series analysis) as in: tsset year_column, yearly varsoc (Obtain lag-order selection statistics for VARs) as in: varsoc column_a column_b vec (vector error-correction model) as in: vec column_a column_b, trend(con) lags(1) noetable Does anyone know any scientific library that I can use through python for this same functionality?
0
1
1,936
0
3,946,648
0
0
0
0
2
false
9
2010-10-15T21:16:00.000
0
5
0
Migrating from Stata to Python
3,946,219
0
python,statistics,numpy,stata
I have absolutely no clue what any of those do, but NumPy and SciPy. Maybe Sage or SymPy.
Some coworkers who have been struggling with Stata 11 are asking for my help to try to automate their laborious work. They mainly use 3 commands in Stata: tsset (sets a time series analysis) as in: tsset year_column, yearly varsoc (Obtain lag-order selection statistics for VARs) as in: varsoc column_a column_b vec (vector error-correction model) as in: vec column_a column_b, trend(con) lags(1) noetable Does anyone know any scientific library that I can use through python for this same functionality?
0
1
1,936
0
3,964,945
0
0
0
0
1
false
15
2010-10-17T20:12:00.000
1
5
0
Image analysis in R
3,955,077
0.039979
python,image,r,analysis
Try the rgdal package. You will be able to read (import) and write (export) GeoTiff image files from/to R. Marcio Pupin Mello
I would like to know how I would go about performing image analysis in R. My goal is to convert images into matrices (pixel-wise information), extract/quantify color, estimate the presence of shapes and compare images based on such metrics/patterns. I am aware of relevant packages available in Python (suggestions relevant to Python are also welcome), but I am looking to accomplish these tasks in R. Thank you for your feedback. -Harsh
0
1
4,953
0
3,993,156
0
0
0
0
1
true
7
2010-10-22T00:50:00.000
7
3
0
What does ... mean in numpy code?
3,993,125
1.2
python,numpy
Yes, you're right. It fills in as many : as required. The only difference occurs when you use multiple ellipses. In that case, the first ellipsis acts in the same way, but each remaining one is converted to a single :.
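Both behaviours mentioned in the question can be checked quickly:

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)

# ... (the Ellipsis object) expands to as many ':' slices as are
# needed to index every remaining axis.
assert np.array_equal(x[..., 1], x[:, :, 1])
assert np.array_equal(x[0, ...], x[0, :, :])

# Assigning through x[...] fills the existing array in place,
# instead of rebinding the name to a new object.
y = np.zeros((2, 3, 4))
y[...] = x
assert np.array_equal(y, x)
```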
And what is it called? I don't know how to search for it; I tried calling it ellipsis with the Google. I don't mean in interactive output when dots are used to indicate that the full array is not being shown, but as in the code I'm looking at, xTensor0[...] = xVTensor[..., 0] From my experimentation, it appears to function the similarly to : in indexing, but stands in for multiple :'s, making x[:,:,1] equivalent to x[...,1].
0
1
1,449
0
53,562,948
0
0
0
0
1
false
1,721
2010-10-22T12:48:00.000
3
22
0
Generate random integers between 0 and 9
3,996,904
0.027266
python,random,integer
This is more of a mathematical approach, but it works 100% of the time: let's say you want to use the random.random() function to generate a number between a and b. To achieve this, just do the following: num = (b-a)*random.random() + a. Note that this yields a float in [a, b); truncate it if you need an integer. Of course, you can generate as many numbers as you like this way.
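A short sketch of that scaling approach, adjusted for the inclusive integer range the question asks about, next to the stdlib shortcuts:

```python
import random

a, b = 0, 9

# Scale-and-truncate: (b - a + 1) equally likely integer slots.
num = int((b - a + 1) * random.random()) + a
assert a <= num <= b

# The stdlib already provides this directly:
assert 0 <= random.randint(0, 9) <= 9   # both ends inclusive
assert 0 <= random.randrange(10) <= 9   # upper bound exclusive
```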
How can I generate random integers between 0 and 9 (inclusive) in Python? For example, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
0
1
2,497,091
0
4,004,384
0
0
0
0
3
false
5
2010-10-23T11:58:00.000
0
4
0
How to build a conceptual search engine?
4,003,840
0
python,search,lucene,nlp,lsa
First, write a piece of python code that returns pineapple, orange, and papaya when you input apple, by focusing on the "is-a" relation of a semantic network. Then continue with the "has-a" relationship, and so on. I think at the end you might get a fairly sufficient piece of code for a school project.
I would like to build an internal search engine (I have a very large collection of thousands of XML files) that is able to map queries to concepts. For example, if I search for "big cats", I would want highly ranked results to return documents with "large cats" as well. But I may also be interested in having it return "huge animals", albeit at a much lower relevancy score. I'm currently reading through the Natural Language Processing in Python book, and it seems WordNet has some word mappings that might prove useful, though I'm not sure how to integrate that into a search engine. Could I use Lucene to do this? How? From further research, it seems "latent semantic analysis" is relevant to what I'm looking for but I'm not sure how to implement it. Any advice on how to get this done?
0
1
1,997
0
4,004,024
0
0
0
0
3
false
5
2010-10-23T11:58:00.000
1
4
0
How to build a conceptual search engine?
4,003,840
0.049958
python,search,lucene,nlp,lsa
This is an incredibly hard problem, and it can't be solved in a way that would always produce adequate results. I'd suggest sticking to some very simple principles instead, so that the results are at least predictable. I think you need two things: a basic morphology engine plus a dictionary of synonyms. Whenever a search query arrives, for each word you: look for a literal match; "normalize/canonicalize" the word using the morphology engine (i.e. make it singular, first form, etc.) and look for matches; look for synonyms of the word. Then repeat for all combinations of the input words, i.e. "big cats", "big cat", "huge cats", "huge cat", etc. In fact, you need to store your index data in canonical form too (singular, first form, etc.) along with the literal form. As for concepts, such as cats also being animals: this is where it gets tricky. It has never really worked, because otherwise Google would already be returning conceptual matches, and it's not doing that.
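A toy sketch of those two steps. The synonym table and the crude singularizer here are hypothetical stand-ins for a real morphology engine and a resource like WordNet:

```python
# Hypothetical synonym data; in practice this would come from WordNet.
SYNONYMS = {"big": {"large", "huge"}, "cat": {"feline"}}

def normalize(word):
    # A deliberately crude "morphology engine": lowercase and
    # strip a trailing 's' to singularize.
    word = word.lower()
    return word[:-1] if word.endswith("s") else word

def expand(query):
    # Each position in the result holds all acceptable variants
    # of the corresponding query word.
    variants = []
    for word in query.split():
        base = normalize(word)
        variants.append({base} | SYNONYMS.get(base, set()))
    return variants

assert expand("big cats") == [{"big", "large", "huge"}, {"cat", "feline"}]
```

A search engine would then match a document if, for each position, it contains any word from that position's variant set, scoring literal matches higher than synonym matches.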
I would like to build an internal search engine (I have a very large collection of thousands of XML files) that is able to map queries to concepts. For example, if I search for "big cats", I would want highly ranked results to return documents with "large cats" as well. But I may also be interested in having it return "huge animals", albeit at a much lower relevancy score. I'm currently reading through the Natural Language Processing in Python book, and it seems WordNet has some word mappings that might prove useful, though I'm not sure how to integrate that into a search engine. Could I use Lucene to do this? How? From further research, it seems "latent semantic analysis" is relevant to what I'm looking for but I'm not sure how to implement it. Any advice on how to get this done?
0
1
1,997
0
4,004,314
0
0
0
0
3
true
5
2010-10-23T11:58:00.000
9
4
0
How to build a conceptual search engine?
4,003,840
1.2
python,search,lucene,nlp,lsa
I'm not sure how to integrate that into a search engine. Could I use Lucene to do this? How? Step 1. Stop. Step 2. Get something to work. Step 3. By then, you'll understand more about Python and Lucene and other tools and ways you might integrate them. Don't start by trying to solve integration problems. Software can always be integrated. That's what an Operating System does. It integrates software. Sometimes you want "tighter" integration, but that's never the first problem to solve. The first problem to solve is to get your search or concept thing or whatever it is to work as a dumb-old command-line application. Or pair of applications knit together by passing files around or knit together with OS pipes or something. Later, you can try and figure out how to make the user experience seamless. But don't start with integration and don't stall because of integration questions. Set integration aside and get something to work.
I would like to build an internal search engine (I have a very large collection of thousands of XML files) that is able to map queries to concepts. For example, if I search for "big cats", I would want highly ranked results to return documents with "large cats" as well. But I may also be interested in having it return "huge animals", albeit at a much lower relevancy score. I'm currently reading through the Natural Language Processing in Python book, and it seems WordNet has some word mappings that might prove useful, though I'm not sure how to integrate that into a search engine. Could I use Lucene to do this? How? From further research, it seems "latent semantic analysis" is relevant to what I'm looking for but I'm not sure how to implement it. Any advice on how to get this done?
0
1
1,997
0
4,023,046
0
0
0
0
3
false
8
2010-10-26T10:42:00.000
0
6
0
Pytables vs. CSV for files that are not very large
4,022,887
0
python,csv,pytables
These are not "exclusive" choices. You need both. CSV is just a data exchange format. If you use pytables, you still need to import and export in CSV format.
I recently came across Pytables and find it to be very cool. It is clear that they are superior to a csv format for very large data sets. I am running some simulations using python. The output is not so large, say 200 columns and 2000 rows. If someone has experience with both, can you suggest which format would be more convenient in the long run for such data sets that are not very large. Pytables has data manipulation capabilities and browsing of the data with Vitables, but the browser does not have as much functionality as, say Excel, which can be used for CSV. Similarly, do you find one better than the other for importing and exporting data, if working mainly in python? Is one more convenient in terms of file organization? Any comments on issues such as these would be helpful. Thanks.
0
1
3,418
0
7,753,331
0
0
0
0
3
false
8
2010-10-26T10:42:00.000
2
6
0
Pytables vs. CSV for files that are not very large
4,022,887
0.066568
python,csv,pytables
One big plus for PyTables is the storage of metadata, like variables etc. If you run the simulations repeatedly with different parameters, you can then store each result as an array entry in the h5 file. We use it to store measurement data plus the experiment scripts that produced the data, so it is all self-contained. BTW: if you need a quick look inside an HDF5 file, you can use HDFView. It's a free Java app from the HDF Group and easy to install.
I recently came across Pytables and find it to be very cool. It is clear that they are superior to a csv format for very large data sets. I am running some simulations using python. The output is not so large, say 200 columns and 2000 rows. If someone has experience with both, can you suggest which format would be more convenient in the long run for such data sets that are not very large. Pytables has data manipulation capabilities and browsing of the data with Vitables, but the browser does not have as much functionality as, say Excel, which can be used for CSV. Similarly, do you find one better than the other for importing and exporting data, if working mainly in python? Is one more convenient in terms of file organization? Any comments on issues such as these would be helpful. Thanks.
0
1
3,418
0
4,024,016
0
0
0
0
3
false
8
2010-10-26T10:42:00.000
1
6
0
Pytables vs. CSV for files that are not very large
4,022,887
0.033321
python,csv,pytables
I think it's very hard to compare PyTables and CSV: PyTables is a data structure, while CSV is an exchange format for data.
I recently came across Pytables and find it to be very cool. It is clear that they are superior to a csv format for very large data sets. I am running some simulations using python. The output is not so large, say 200 columns and 2000 rows. If someone has experience with both, can you suggest which format would be more convenient in the long run for such data sets that are not very large. Pytables has data manipulation capabilities and browsing of the data with Vitables, but the browser does not have as much functionality as, say Excel, which can be used for CSV. Similarly, do you find one better than the other for importing and exporting data, if working mainly in python? Is one more convenient in terms of file organization? Any comments on issues such as these would be helpful. Thanks.
0
1
3,418
0
4,066,155
0
0
0
0
1
false
6
2010-11-01T01:16:00.000
2
4
0
Proper data structure to represent a Sudoku puzzle?
4,066,075
0.099668
python,data-structures,graph,sudoku
Others have reasonably suggested simply using a 2D array. I note that 2D arrays in most language implementations (anything implemented as "array of array of X") suffer from additional access-time overhead (one access to the top-level array, a second to the subarray). I suggest you implement the data structure abstractly as a 2D array (perhaps even continuing to use 2 indexes), but implement the array as a single block of 81 cells, indexed classically by i*9+j. This gives you conceptual clarity and a somewhat more efficient implementation, by avoiding that second memory access. You should be able to hide the 1D array access behind setters and getters that take 2D indexes. If your language has the capability (I don't know if this is true for Python), such small methods can be inlined for additional speed.
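A sketch of that layout, with the 2D view hidden behind accessors (class and method names are illustrative):

```python
class SudokuGrid:
    """81 cells in one flat list; (row, col) maps to row*9 + col."""

    def __init__(self):
        self.cells = [0] * 81  # 0 means blank

    def get(self, row, col):
        return self.cells[row * 9 + col]

    def set(self, row, col, value):
        self.cells[row * 9 + col] = value

    def row(self, r):
        return self.cells[r * 9 : r * 9 + 9]

    def column(self, c):
        return self.cells[c::9]  # every 9th cell, starting at column c

    def box(self, r, c):
        # The 3x3 group containing cell (r, c).
        r0, c0 = 3 * (r // 3), 3 * (c // 3)
        return [self.get(r0 + i, c0 + j) for i in range(3) for j in range(3)]

g = SudokuGrid()
g.set(4, 7, 5)
assert g.get(4, 7) == 5
assert g.row(4)[7] == 5
assert g.column(7)[4] == 5
assert 5 in g.box(4, 7)
```

The row, column, and box accessors cover the three comparisons the question asks for.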
What would be a smart data structure to use to represent a Sudoku puzzle? I.e. a 9X9 square where each "cell" contains either a number or a blank. Special considerations include: Ability to compare across row, column, and in 3X3 "group Ease of implementation (specifically in Python) Efficiency (not paramount) I suppose in a pinch, a 2D array might work but that seems to be a less than elegant solution. I just would like to know if there's a better data structure.
0
1
7,767
0
4,072,921
0
0
0
0
1
false
7
2010-11-01T20:37:00.000
0
3
0
Add more sample points to data
4,072,844
0
python,numpy,scipy
If your application is not sensitive to precision or you just want a quick overview, you could just fill the unknown data points with averages from neighbouring known data points (in other words, do naive linear interpolation).
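A quick sketch of that naive linear interpolation with numpy (`np.interp` does 1-D piecewise-linear interpolation; the function name `resample_rows` is my own), using shapes matching the question:

```python
import numpy as np

def resample_rows(a, new_len):
    """Resample each row of a 2-D array onto new_len points by linear
    interpolation over a common [0, 1] axis."""
    old_len = a.shape[1]
    x_old = np.linspace(0.0, 1.0, old_len)
    x_new = np.linspace(0.0, 1.0, new_len)
    return np.vstack([np.interp(x_new, x_old, row) for row in a])

# 20 sine curves with 45 points each, resampled to 100 points each
a = np.sin(np.linspace(0, 2 * np.pi, 45))[None, :].repeat(20, axis=0)
b = resample_rows(a, 100)
```

For smoother results you could swap `np.interp` for a spline (e.g. `scipy.interpolate`), but for a quick comparison this is usually enough.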
Given some data of shape 20x45, where each row is a separate data set, say 20 different sine curves with 45 data points each, how would I go about getting the same data, but with shape 20x100? In other words, I have some data A of shape 20x45, and some data B of length 20x100, and I would like to have A be of shape 20x100 so I can compare them better. This is for Python and Numpy/Scipy. I assume it can be done with splines, so I am looking for a simple example, maybe just 2x10 to 2x20 or something, where each row is just a line, to demonstrate the solution. Thanks!
0
1
5,913
0
4,098,941
0
0
1
0
4
false
2
2010-11-04T15:51:00.000
1
6
0
Collecting, storing, and retrieving large amounts of numeric data
4,098,509
0.033321
java,c++,python,storage,simulation
Using D-Bus format to send the information may be to your advantage. The format is standard, binary, and D-Bus is implemented in multiple languages, and can be used to send both over the network and inter-process on the same machine.
I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid/ask/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?) (1) Optimize for disk space, I/O speed, or memory? For simulation, the overall speed is important. We want the I/O (really, I) speed of the data just faster than the computational engine, so we are not I/O limited. (2) Store text, or something else (binary numeric)? (3) Given a set of choices from (1)-(2), are there any standout language/library combinations to do the job-- Java, Python, C++, or something else? I would classify this code as "write and forget", so more points for efficiency over clarity/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions. Edit: this is for a Linux system (Ubuntu) Thanks
0
1
2,032
0
4,098,550
0
0
1
0
4
false
2
2010-11-04T15:51:00.000
0
6
0
Collecting, storing, and retrieving large amounts of numeric data
4,098,509
0
java,c++,python,storage,simulation
If you are just storing, then use system tools. Don't write your own. If you need to do some real-time processing of the data before it is stored, then that's something completely different.
I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid/ask/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?) (1) Optimize for disk space, I/O speed, or memory? For simulation, the overall speed is important. We want the I/O (really, I) speed of the data just faster than the computational engine, so we are not I/O limited. (2) Store text, or something else (binary numeric)? (3) Given a set of choices from (1)-(2), are there any standout language/library combinations to do the job-- Java, Python, C++, or something else? I would classify this code as "write and forget", so more points for efficiency over clarity/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions. Edit: this is for a Linux system (Ubuntu) Thanks
0
1
2,032
0
4,098,613
0
0
1
0
4
false
2
2010-11-04T15:51:00.000
1
6
0
Collecting, storing, and retrieving large amounts of numeric data
4,098,509
0.033321
java,c++,python,storage,simulation
Actually, this is quite similar to what I'm doing, which is monitoring changes players make to the world in a game. I'm currently using an sqlite database with Python. At the start of the program, I load the disk database into memory, for fast write procedures. Each change is put into two lists: one for the memory database and one for the disk database. Every x or so updates, the memory database is updated, and a counter is incremented. This is repeated, and when the counter reaches 5, it's reset, the list of changes for the disk is flushed to the disk database, and the list is cleared. I have found this works well if I also set the journal mode to WAL (Write-Ahead Logging). This method can sustain about 100-300 updates a second if I update memory every 100 updates and flush to disk every 5 memory updates. You should probably choose binary, since, unless you have faults in your data sources, that would be most logical.
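A rough sketch of that batching scheme (table and column names invented for illustration; both databases are in-memory here, but `disk` stands in for a real file path, where you would also run `PRAGMA journal_mode=WAL`):

```python
import sqlite3

# Changes go to the fast in-memory database immediately and are queued
# for the disk database, which is only touched in batches.
disk = sqlite3.connect(":memory:")  # stand-in for a real on-disk file
mem = sqlite3.connect(":memory:")
for db in (disk, mem):
    db.execute("CREATE TABLE changes (x INTEGER, y INTEGER, block TEXT)")

pending = []  # changes queued for the disk database

def record_change(x, y, block, flush_every=5):
    mem.execute("INSERT INTO changes VALUES (?, ?, ?)", (x, y, block))
    pending.append((x, y, block))
    if len(pending) >= flush_every:
        flush_to_disk()

def flush_to_disk():
    disk.executemany("INSERT INTO changes VALUES (?, ?, ?)", pending)
    disk.commit()
    pending.clear()
```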
I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid/ask/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?) (1) Optimize for disk space, I/O speed, or memory? For simulation, the overall speed is important. We want the I/O (really, I) speed of the data just faster than the computational engine, so we are not I/O limited. (2) Store text, or something else (binary numeric)? (3) Given a set of choices from (1)-(2), are there any standout language/library combinations to do the job-- Java, Python, C++, or something else? I would classify this code as "write and forget", so more points for efficiency over clarity/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions. Edit: this is for a Linux system (Ubuntu) Thanks
0
1
2,032
0
4,098,582
0
0
1
0
4
false
2
2010-11-04T15:51:00.000
3
6
0
Collecting, storing, and retrieving large amounts of numeric data
4,098,509
0.099668
java,c++,python,storage,simulation
Optimizing for disk space and IO speed is the same thing - these days, CPUs are so fast compared to IO that it's often overall faster to compress data before storing it (you may actually want to do that). I don't really see memory playing a big role (though you should probably use a reasonably-sized buffer to ensure you're doing sequential writes). Binary is more compact (and thus faster). Given the amount of data, I doubt whether being human-readable has any value. The only advantage of a text format would be that it's easier to figure out and correct if it gets corrupted or you lose the parsing code.
I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid/ask/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?) (1) Optimize for disk space, I/O speed, or memory? For simulation, the overall speed is important. We want the I/O (really, I) speed of the data just faster than the computational engine, so we are not I/O limited. (2) Store text, or something else (binary numeric)? (3) Given a set of choices from (1)-(2), are there any standout language/library combinations to do the job-- Java, Python, C++, or something else? I would classify this code as "write and forget", so more points for efficiency over clarity/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions. Edit: this is for a Linux system (Ubuntu) Thanks
0
1
2,032
0
4,101,917
0
0
0
0
1
false
1
2010-11-04T22:01:00.000
0
2
1
Hadoop/Elastic Map Reduce with binary executable?
4,101,815
0
python,matlab,amazon-web-services,hadoop,mapreduce
The following is not exactly an answer to your Hadoop question, but I couldn't resist asking: why don't you execute your processing jobs on Grid resources? There are proven solutions for executing compute-intensive workflows on the Grid, and as far as I know the MATLAB runtime environment is usually available on these resources. You might especially consider using the Grid if you are in academia. Good luck.
I am writing a distributed image processing application using Hadoop streaming, Python, MATLAB, and Elastic MapReduce. I have compiled a binary executable of my MATLAB code using the MATLAB compiler. I am wondering how I can incorporate this into my workflow so the binary is part of the processing on Amazon's Elastic MapReduce? It looks like I have to use the Hadoop Distributed Cache? The code is very complicated (and not written by me) so porting it to another language is not possible right now. Thanks
0
1
1,138
0
4,122,980
0
0
0
1
2
true
15
2010-11-08T10:00:00.000
34
2
0
Csv blank rows problem with Excel
4,122,794
1.2
python,excel,csv
You're using open('file.csv', 'w')--try open('file.csv', 'wb'). The Python csv module requires output files be opened in binary mode.
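For example, on Python 3 (where binary-mode csv writing is no longer allowed) the equivalent fix is opening the file in text mode with newline="", which likewise stops Excel from showing a blank row after every record:

```python
import csv

# newline="" tells Python not to translate the "\r\n" terminators the
# csv module writes, so Excel sees exactly one line break per row.
with open("file.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name"])
    writer.writerow([1, "alice"])
```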
I have a csv file which contains rows from a sqlite3 database. I wrote the rows to the csv file using Python. When I open the csv file with MS Excel, a blank row appears below every row, but the file in Notepad is fine (without any blanks). Does anyone know why this is happening and how I can fix it? Edit: I used the strip() function for all the attributes before writing a row. Thanks.
0
1
7,209
0
4,122,816
0
0
0
1
2
false
15
2010-11-08T10:00:00.000
0
2
0
Csv blank rows problem with Excel
4,122,794
0
python,excel,csv
The first thing that comes to mind (just an idea) is that you might have used "\r\n" as the row delimiter, which is shown as one line break in Notepad, but Excel expects only "\n" or only "\r" and so interprets it as two line breaks.
I have a csv file which contains rows from a sqlite3 database. I wrote the rows to the csv file using Python. When I open the csv file with MS Excel, a blank row appears below every row, but the file in Notepad is fine (without any blanks). Does anyone know why this is happening and how I can fix it? Edit: I used the strip() function for all the attributes before writing a row. Thanks.
0
1
7,209
0
53,662,588
0
0
0
0
3
false
62
2010-11-09T03:49:00.000
0
11
0
python matplotlib framework under macosx?
4,130,355
0
python,macos,matplotlib,fink
Simply aliasing a new command in ~/.bash_profile to launch Python will do the trick: alias vpython3=/Library/Frameworks/Python.framework/Versions/3.6/bin/python3 (replace 3.6 with your own Python version). Then run 'source ~/.bash_profile' and use vpython3 to launch Python 3. Explanation: Python is actually installed as a framework by default on Mac, but using virtualenv links your python3 command to the created virtual environment instead of the framework directory above (run 'which python3' in a terminal and you'll see that). Presumably matplotlib has to find the bin/, include/, lib/, etc. inside the Python framework.
I am getting this error: /sw/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py:235: UserWarning: Python is not installed as a framework. The MacOSX backend may not work correctly if Python is not installed as a framework. Please see the Python documentation for more information on installing Python as a framework on Mac OS X I installed python27 using fink, and by default matplotlib is using the macosx backend.
0
1
25,060
0
4,131,726
0
0
0
0
3
true
62
2010-11-09T03:49:00.000
18
11
0
python matplotlib framework under macosx?
4,130,355
1.2
python,macos,matplotlib,fink
There are two ways Python can be built and installed on Mac OS X. One is as a traditional flat Unix-y shared library. The other is known as a framework install, a file layout similar to other frameworks on OS X where all of the component directories (include, lib, bin) for the product are installed as subdirectories under the main framework directory. The Fink project installs Pythons using the Unix shared library method. Most other distributors, including the Apple-supplied Pythons in OS X, the python.org installers, and the MacPorts project, install framework versions of Python. One of the advantages of a framework installation is that it will work properly with various OS X API calls that require a window manager connection (generally GUI-related interfaces) because the Python interpreter is packaged as an app bundle within the framework. If you do need the functions in matplotlib that require the GUI functions, the simplest approach may be to switch to MacPorts which also packages matplotlib (port py27-matplotlib) and its dependencies. If so, be careful not to mix packages between Fink and MacPorts. It's best to stick with one or the other unless you are really careful. Adjust your shell path accordingly; it would be safest to remove all of the Fink packages and install MacPorts versions.
I am getting this error: /sw/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py:235: UserWarning: Python is not installed as a framework. The MacOSX backend may not work correctly if Python is not installed as a framework. Please see the Python documentation for more information on installing Python as a framework on Mac OS X I installed python27 using fink, and by default matplotlib is using the macosx backend.
0
1
25,060
0
33,873,802
0
0
0
0
3
false
62
2010-11-09T03:49:00.000
31
11
0
python matplotlib framework under macosx?
4,130,355
1
python,macos,matplotlib,fink
Optionally you could use the Agg backend which requires no extra installation of anything. Just put backend : Agg into ~/.matplotlib/matplotlibrc
I am getting this error: /sw/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py:235: UserWarning: Python is not installed as a framework. The MacOSX backend may not work correctly if Python is not installed as a framework. Please see the Python documentation for more information on installing Python as a framework on Mac OS X I installed python27 using fink, and by default matplotlib is using the macosx backend.
0
1
25,060
0
30,009,455
0
0
0
0
1
false
2
2010-11-09T11:51:00.000
0
3
0
HDF5 : storing NumPy data
4,133,327
0
python,c,numpy,hdf5,pytables
HDF5 takes care of binary compatibility of structures for you. You simply have to tell it what your structs consist of (dtype) and you'll have no problems saving/reading record arrays - this is because the type system is basically 1:1 between numpy and HDF5. If you use h5py, I'm confident the I/O should be fast enough, provided you use all native types and large batched reads/writes - up to the entire dataset, if allowable. After that it depends on chunking and what filters you apply (shuffle and compression, for example) - it's also worth noting that these can sometimes speed things up by greatly reducing file size, so always look at benchmarks. Note that the type and filter choices are made on the end creating the HDF5 document. If you're trying to parse HDF5 yourself, you're doing it wrong. Use the C++ and C APIs if you're working in C++/C. There are examples of so-called "compound types" on the HDF5 Group's website.
When I used NumPy I stored its data in the native *.npy format. It's very fast and gave me some benefits, like this one: I could read *.npy from C code as simple binary data (I mean *.npy is binary-compatible with C structures). Now I'm dealing with HDF5 (PyTables at this moment). As I read in the tutorial, they use a NumPy serializer to store NumPy data, so can I read these data from C as from simple *.npy files? Is HDF5's NumPy data binary-compatible with C structures too? UPD: I have a Matlab client reading from HDF5, but I don't want to read HDF5 from C++ because reading binary data from *.npy is times faster, so I really need a way to read HDF5 from C++ (binary compatibility). So I'm already using two ways of transferring data - *.npy (read from C++ as bytes, from Python natively) and HDF5 (accessed from Matlab). And if possible, I want to use only one way - HDF5. But to do this I have to find a way to make HDF5 binary-compatible with C++ structures. Please help: if there is some way to turn off compression in HDF5, or something else to make HDF5 binary-compatible with C++ structures, tell me where I can read about it...
0
1
6,260
0
4,156,445
0
1
0
0
2
false
0
2010-11-11T00:32:00.000
2
3
0
Extract different POS words for a given word in python nltk
4,150,443
0.132549
python,nltk
There are two options i can think of off the top of my head: Option one is to iterate over the sample POS-tagged corpora and simply build this mapping yourself. This gives you the POS tags that are associated with a particular word in the corpora. Option two is to build a hidden markov model POS tagger on the corpora, then inspect the values of the model. This gives you the POS tags that are associated with a particular word in the corpora plus their a priori probabilities, as well as some other statistical data. Depending on what your use-case is, one may be better than the other. I would start with option one, since it's fast and easy.
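Option one can be sketched in a few lines. Here I build the word-to-tags mapping from a tiny hand-made tagged sample; with NLTK you would feed in a real tagged corpus instead, e.g. nltk.corpus.brown.tagged_words() (which requires the corpus download):

```python
from collections import defaultdict

# Stand-in for a POS-tagged corpus: (word, tag) pairs.
tagged_words = [
    ("add", "VB"), ("addition", "NN"), ("additive", "JJ"),
    ("add", "VB"), ("run", "VB"), ("run", "NN"),
]

# Map each word to the set of POS tags it was seen with.
pos_map = defaultdict(set)
for word, tag in tagged_words:
    pos_map[word].add(tag)
```

Note this gives you the tags observed for each surface form; it won't derive "addition" from "add" - for that kind of morphological relation you'd want to look at WordNet's derivationally related forms.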
Is there any package in python nltk that can produce all different parts of speech words for a given word. For example if i give add(verb) then it must produce addition(noun),additive(adj) and so on. Can anyone let me know?
0
1
430
0
4,155,066
0
1
0
0
2
false
0
2010-11-11T00:32:00.000
0
3
0
Extract different POS words for a given word in python nltk
4,150,443
0
python,nltk
NLTK has a lot of clever things hiding away, so there might be a direct way of doing it. However, I think you may have to write your own code to work with the WordNet database.
Is there any package in python nltk that can produce all different parts of speech words for a given word. For example if i give add(verb) then it must produce addition(noun),additive(adj) and so on. Can anyone let me know?
0
1
430
0
4,151,409
0
0
0
0
2
false
6
2010-11-11T02:26:00.000
0
3
0
Where do two 2-D arrays begin to overlap each other?
4,150,909
0
python,multidimensional-array,numpy,subdomain,overlap
Can you say more? What model are you using? What are you modelling? How is it computed? Can you make the dimensions match to avoid the fit? (i.e. if B doesn't depend on all of A, only plug in the part of A that B models, or compute boring values for the parts of B that wouldn't overlap A and drop those values later)
I'm working with model output at the moment, and I can't seem to come up with a nice way of combining two arrays of data. Arrays A and B store different data, and the entries in each correspond to some spatial (x,y) point -- A holds some parameter, and B holds model output. The problem is that B is a spatial subsection of A -- that is, if the model were for the entire world, A would store the parameter at each point on the earth, and B would store the model output only for those points in Africa. So I need to find how much B is offset from A -- put another way, I need to find the indexes at which they start to overlap. So if A.shape=(1000,1500), is B the (750:850, 200:300) part of that, or the (783:835, 427:440) subsection? I have arrays associated with both A and B which store the (x,y) positions of the gridpoints for each. This would seem to be a simple problem -- find where the two arrays overlap. And I can solve it with scipy.spatial's KDTree simply enough, but it's very slow. Anyone have any better ideas?
0
1
1,367
0
4,191,918
0
0
0
0
2
false
6
2010-11-11T02:26:00.000
0
3
0
Where do two 2-D arrays begin to overlap each other?
4,150,909
0
python,multidimensional-array,numpy,subdomain,overlap
I need to find the indexes at which they start to overlap So are you looking for indexes from A or from B? And is B strictly rectangular? Finding the bounding box or convex hull of B is really cheap.
I'm working with model output at the moment, and I can't seem to come up with a nice way of combining two arrays of data. Arrays A and B store different data, and the entries in each correspond to some spatial (x,y) point -- A holds some parameter, and B holds model output. The problem is that B is a spatial subsection of A -- that is, if the model were for the entire world, A would store the parameter at each point on the earth, and B would store the model output only for those points in Africa. So I need to find how much B is offset from A -- put another way, I need to find the indexes at which they start to overlap. So if A.shape=(1000,1500), is B the (750:850, 200:300) part of that, or the (783:835, 427:440) subsection? I have arrays associated with both A and B which store the (x,y) positions of the gridpoints for each. This would seem to be a simple problem -- find where the two arrays overlap. And I can solve it with scipy.spatial's KDTree simply enough, but it's very slow. Anyone have any better ideas?
0
1
1,367
0
4,156,169
0
1
0
0
2
false
1
2010-11-11T15:19:00.000
1
4
0
moving on from python
4,155,955
0.049958
python,programming-languages
If you just want to learn a new language, you could take a look at Scala. The language is influenced by languages like Ruby, Python, and Erlang, but is statically typed and runs on the JVM. Its speed is comparable to Java's. And you can use all the Java libraries, plus reuse a lot of your Python code through Jython.
I use python heavily for manipulating data and then packaging it for statistical modeling (R through RPy2). Feeling a little restless, I would like to branch out into other languages where Faster than python It's free There's good books, documentations and tutorials Very suitable for data manipulation Lots of libraries for statistical modeling Any recommendations?
0
1
210
0
4,157,423
0
1
0
0
2
false
1
2010-11-11T15:19:00.000
1
4
0
moving on from python
4,155,955
0.049958
python,programming-languages
I didn't see you mention SciPy on your list... I tend to like R syntax better, but they cover much of the same ground. SciPy has faster matrix and array structures than the general-purpose Python ones. In most places where I have wanted to use Cython, SciPy has been just as easy/fast. GNU Octave is an open/free alternative to MATLAB that might also interest you.
I use python heavily for manipulating data and then packaging it for statistical modeling (R through RPy2). Feeling a little restless, I would like to branch out into other languages where Faster than python It's free There's good books, documentations and tutorials Very suitable for data manipulation Lots of libraries for statistical modeling Any recommendations?
0
1
210
0
4,158,455
0
0
0
0
1
true
51
2010-11-11T19:19:00.000
72
3
0
more than 9 subplots in matplotlib
4,158,367
1.2
python,charts,matplotlib
It was easier than I expected, I just did: pylab.subplot(4,4,10) and it worked.
Is it possible to get more than 9 subplots in matplotlib? I am on the subplots command pylab.subplot(449); how can I get a 4410 to work? Thank you very much.
0
1
28,836
0
4,215,056
0
0
0
0
2
true
25
2010-11-18T12:44:00.000
13
8
0
An example using python bindings for SVM library, LIBSVM
4,214,868
1.2
python,machine-learning,svm,libsvm
LIBSVM reads the data from a tuple containing two lists. The first list contains the classes and the second list contains the input data. You also need to specify which kernel you want to use by creating an svm_parameter. Create a simple dataset with two possible classes: >> from libsvm import * >> prob = svm_problem([1,-1],[[1,0,1],[-1,0,-1]]) >> param = svm_parameter(kernel_type = LINEAR, C = 10) ## training the model >> m = svm_model(prob, param) # testing the model >> m.predict([1, 1, 1])
I am in dire need of a classification task example using LibSVM in python. I don't know how the Input should look like and which function is responsible for training and which one for testing Thanks
0
1
50,030
0
8,302,624
0
0
0
0
2
false
25
2010-11-18T12:44:00.000
3
8
0
An example using python bindings for SVM library, LIBSVM
4,214,868
0.07486
python,machine-learning,svm,libsvm
Adding to @shinNoNoir: param.kernel_type represents the type of kernel function you want to use: 0: linear, 1: polynomial, 2: RBF, 3: sigmoid. Also keep in mind that in svm_problem(y, x), y is the class labels and x is the class instances, and x and y can only be lists, tuples, and dictionaries (no numpy arrays).
I am in dire need of a classification task example using LibSVM in python. I don't know how the Input should look like and which function is responsible for training and which one for testing Thanks
0
1
50,030
0
4,263,022
0
0
1
0
2
false
12
2010-11-24T02:31:00.000
0
5
0
Data analysis using R/python and SSDs
4,262,984
0
python,r,data-analysis,solid-state-drive
The read and write times for SSDs are significantly better than for standard 7200 RPM disks (it's still worth it with a 10k RPM disk; I'm not sure how much of an improvement it is over a 15k). So, yes, you'd get much faster data access times. The performance improvement is undeniable. Then, it's a question of economics. 2TB 7200 RPM disks are $170 apiece, while 100GB SSDs cost $210. So if you have a lot of data, you may run into a problem. If you read/write a lot of data, get an SSD. If the application is CPU-intensive, however, you'd benefit much more from getting a better processor.
Does anyone have any experience using r/python with data stored in Solid State Drives. If you are doing mostly reads, in theory this should significantly improve the load times of large datasets. I want to find out if this is true and if it is worth investing in SSDs for improving the IO rates in data intensive applications.
0
1
4,485
0
4,264,161
0
0
1
0
2
false
12
2010-11-24T02:31:00.000
2
5
0
Data analysis using R/python and SSDs
4,262,984
0.07983
python,r,data-analysis,solid-state-drive
I have to second John's suggestion to profile your application. My experience is that it isn't the actual data reads that are the slow part, it's the overhead of creating the programming objects to contain the data, casting from strings, memory allocation, etc. I would strongly suggest you profile your code first, and consider using alternative libraries (like numpy) to see what improvements you can get before you invest in hardware.
Does anyone have any experience using r/python with data stored in Solid State Drives. If you are doing mostly reads, in theory this should significantly improve the load times of large datasets. I want to find out if this is true and if it is worth investing in SSDs for improving the IO rates in data intensive applications.
0
1
4,485
0
5,586,430
0
0
0
0
1
false
8
2010-11-25T02:03:00.000
2
2
0
Using SlopeOne algorithm to predict if a gamer can complete a level in a Game?
4,273,169
0.197375
python,algorithm,filtering,prediction,collaborative
I think it might work, but I would first apply log to the number of tries (you can't take log(0), so use tries rather than retries). If someone found a level easy, they would try it once or twice, whereas people who found it hard would generally have to do it over and over again. The difference between doing it in 1 go vs. 2 goes is much greater than 20 goes vs. 21 goes. This would also remove the need to place an arbitrary upper limit on the number of goes.
I am planning to use SlopeOne algorithm to predict if a gamer can complete a given level in a Game or not? Here is the scenario: Lots of Gamers play and try to complete 100 levels in the game. Each gamer can play a level as many times as they want until they cross the level. The system keeps track of the level and the number of ReTries for each level. Each Game Level falls into one of the 3 categories (Easy, Medium, Hard) Approximate distribution of the levels is 33% across each category meaning 33% of the levels are Easy, 33% of the levels are Hard etc. Using this information: When a new gamer starts playing the game, after a few levels, I want to be able to predict which level can the Gamer Cross easily and which levels can he/she not cross easily. with this predictive ability I would like to present the game levels that the user would be able to cross with 50% probability. Can I use SlopeOne algorithm for this? Reasoning is I see a lot of similarities between what I want to with say a movie rating system. n users, m items and N ratings to predict user rating for a given item. Similarly, in my case, I have n users, m levels and N Retries ... The only difference being in a movie rating system the rating is fixed on a 1-5 scale and in my case the retries can range from 1-x (x could be as high as 30) while theoretically someone could retry more 30 times, for now I could start with fixing the upper limit at 30 and adjust after I have more data. Thanks.
0
1
431
0
4,273,543
0
0
0
0
2
false
50
2010-11-25T03:18:00.000
4
5
0
Reversible hash function?
4,273,466
0.158649
python,hash
Why not just XOR with a nice long number? Easy. Fast. Reversible. Or, if this doesn't need to be terribly secure, you could convert from base 10 to some smaller base (like base 8 or base 4, depending on how long you want the numbers to be).
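The XOR idea in two lines (the key constant below is arbitrary; note this only obfuscates, it provides no security, since XOR with a fixed key is trivially recoverable):

```python
KEY = 0x5DEECE66D15  # arbitrary made-up constant

def scramble(n):
    # XOR with a fixed key is its own inverse: the same function
    # both obfuscates the value and recovers it.
    return n ^ KEY
```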
I need a reversible hash function (obviously the input will be much smaller in size than the output) that maps the input to the output in a random-looking way. Basically, I want a way to transform a number like "123" to a larger number like "9874362483910978", but not in a way that will preserve comparisons, so it must not be always true that, if x1 > x2, f(x1) > f(x2) (but neither must it be always false). The use case for this is that I need to find a way to transform small numbers into larger, random-looking ones. They don't actually need to be random (in fact, they need to be deterministic, so the same input always maps to the same output), but they do need to look random (at least when base64encoded into strings, so shifting by Z bits won't work as similar numbers will have similar MSBs). Also, easy (fast) calculation and reversal is a plus, but not required. I don't know if I'm being clear, or if such an algorithm exists, but I'd appreciate any and all help!
0
1
47,252
0
4,274,259
0
0
0
0
2
false
50
2010-11-25T03:18:00.000
19
5
0
Reversible hash function?
4,273,466
1
python,hash
What you are asking for is encryption. A block cipher in its basic mode of operation, ECB, reversibly maps a input block onto an output block of the same size. The input and output blocks can be interpreted as numbers. For example, AES is a 128 bit block cipher, so it maps an input 128 bit number onto an output 128 bit number. If 128 bits is good enough for your purposes, then you can simply pad your input number out to 128 bits, transform that single block with AES, then format the output as a 128 bit number. If 128 bits is too large, you could use a 64 bit block cipher, like 3DES, IDEA or Blowfish. ECB mode is considered weak, but its weakness is the constraint that you have postulated as a requirement (namely, that the mapping be "deterministic"). This is a weakness, because once an attacker has observed that 123 maps to 9874362483910978, from then on whenever she sees the latter number, she knows the plaintext was 123. An attacker can perform frequency analysis and/or build up a dictionary of known plaintext/ciphertext pairs.
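To show the block-cipher-as-reversible-permutation idea without any crypto library, here is a toy Feistel network over 64-bit numbers. This is a sketch of the construction only, NOT a vetted cipher - for anything real, use AES (or 3DES etc.) as described above:

```python
import hashlib

def _round(half, key, i):
    # Round function: any deterministic pseudo-random 32-bit value
    # derived from one half, the key, and the round number.
    data = f"{key}:{i}:{half}".encode()
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def feistel_encrypt(n, key, rounds=4):
    """Reversible, deterministic, random-looking map on 64-bit ints."""
    left, right = n >> 32, n & 0xFFFFFFFF
    for i in range(rounds):
        left, right = right, left ^ _round(right, key, i)
    return (left << 32) | right

def feistel_decrypt(n, key, rounds=4):
    # Undo the rounds in reverse order; XOR cancels itself out.
    left, right = n >> 32, n & 0xFFFFFFFF
    for i in reversed(range(rounds)):
        left, right = right ^ _round(left, key, i), left
    return (left << 32) | right
```

The Feistel structure is exactly why the mapping is invertible even though the round function itself is one-way: each round only XORs one half with a value computed from the other half.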
I need a reversible hash function (obviously the input will be much smaller in size than the output) that maps the input to the output in a random-looking way. Basically, I want a way to transform a number like "123" to a larger number like "9874362483910978", but not in a way that will preserve comparisons, so it must not be always true that, if x1 > x2, f(x1) > f(x2) (but neither must it be always false). The use case for this is that I need to find a way to transform small numbers into larger, random-looking ones. They don't actually need to be random (in fact, they need to be deterministic, so the same input always maps to the same output), but they do need to look random (at least when base64-encoded into strings, so shifting by Z bits won't work as similar numbers will have similar MSBs). Also, easy (fast) calculation and reversal is a plus, but not required. I don't know if I'm being clear, or if such an algorithm exists, but I'd appreciate any and all help!
0
1
47,252
0
4,323,638
0
0
0
0
1
true
41
2010-12-01T08:45:00.000
29
2
0
How to get started with Big Data Analysis
4,322,559
1.2
python,r,hadoop,bigdata
Using the Python Disco project for example. Good. Play with that. Using the RHIPE package and finding toy datasets and problem areas. Fine. Play with that, too. Don't sweat finding "big" datasets. Even small datasets present very interesting problems. Indeed, any dataset is a starting point. I once built a small star schema to analyze the $60M budget of an organization. The source data was in spreadsheets, and essentially incomprehensible. So I unloaded it into a star schema and wrote several analytical programs in Python to create simplified reports of the relevant numbers. Finding the right information to allow me to decide if I need to move to NoSQL from RDBMS-type databases This is easy. First, get a book on data warehousing (for example, Ralph Kimball's The Data Warehouse Toolkit). Second, study the "Star Schema" carefully -- particularly all the variants and special cases that Kimball explains (in depth). Third, realize the following: SQL is for Updates and Transactions. When doing "analytical" processing (big or small) there's almost no update of any kind. SQL (and related normalization) don't really matter much anymore. Kimball's point (and others', too) is that most of your data warehouse is not in SQL, it's in simple Flat Files. A data mart (for ad-hoc, slice-and-dice analysis) may be in a relational database to permit easy, flexible processing with SQL. So the "decision" is trivial. If it's transactional ("OLTP") it must be in a Relational or OO DB. If it's analytical ("OLAP") it doesn't require SQL except for slice-and-dice analytics; and even then the DB is loaded from the official files as needed.
I've been a long time user of R and have recently started working with Python. Using conventional RDBMS systems for data warehousing, and R/Python for number-crunching, I feel the need now to get my hands dirty with Big Data Analysis. I'd like to know how to get started with Big Data crunching. - How to start simple with Map/Reduce and the use of Hadoop How can I leverage my skills in R and Python to get started with Big Data analysis. Using the Python Disco project for example. Using the RHIPE package and finding toy datasets and problem areas. Finding the right information to allow me to decide if I need to move to NoSQL from RDBMS type databases All in all, I'd like to know how to start small and gradually build up my skills and know-how in Big Data Analysis. Thank you for your suggestions and recommendations. I apologize for the generic nature of this query, but I'm looking to gain more perspective regarding this topic. Harsh
0
1
18,227
0
19,205,464
0
0
0
0
2
false
7
2010-12-01T20:33:00.000
1
4
0
Generating a graph with certain degree distribution?
4,328,837
0.049958
python,algorithm,r,graph,networkx
I know this is very late, but you can do the same thing, a little more straightforwardly, with Mathematica. RandomGraph[DegreeGraphDistribution[{3, 3, 3, 3, 3, 3, 3, 3}], 4] This will generate 4 random graphs, with each node having a prescribed degree.
I am trying to generate a random graph that has small-world properties (exhibits a power law distribution). I just started using the networkx package and discovered that it offers a variety of random graph generation. Can someone tell me if it is possible to generate a graph where a given node's degree follows a gamma distribution (either in R or using Python's networkx package)?
0
1
9,242
0
4,329,072
0
0
0
0
2
false
7
2010-12-01T20:33:00.000
2
4
0
Generating a graph with certain degree distribution?
4,328,837
0.099668
python,algorithm,r,graph,networkx
I did this a while ago in base Python... IIRC, I used the following method. From memory, so this may not be entirely accurate, but hopefully it's worth something: Choose the number of nodes, N, in your graph, and the density (existing edges over possible edges), D. This implies the number of edges, E. For each node, assign its degree by first choosing a random positive number x and finding P(x), where P is your pdf. The node's degree is (P(x)*E/2) -1. Choose a node at random, and connect it to another random node. If either node has realized its assigned degree, eliminate it from further selection. Repeat E times. N.B. that this doesn't create a connected graph in general.
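The stub-pairing idea underlying recipes like the one above is usually called the configuration model. Below is a hedged, stdlib-only sketch (the function names and parameters are my own) that draws a degree sequence from a gamma distribution, as the question asks, and then pairs stubs at random. Like the procedure in the answer, it does not guarantee a simple or connected graph.

```python
import random

def pair_stubs(degrees, seed=0):
    """Configuration-model pairing: each node gets one 'stub' per unit
    of degree; the stubs are shuffled and paired off into edges.
    Self-loops and repeated edges can occur and are usually discarded
    or collapsed afterwards."""
    if sum(degrees) % 2:
        raise ValueError("degree sum must be even")
    rng = random.Random(seed)
    stubs = [node for node, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))

# degrees drawn from a gamma distribution, rounded to ints >= 1
rng = random.Random(1)
degrees = [max(1, round(rng.gammavariate(2.0, 2.0))) for _ in range(50)]
if sum(degrees) % 2:   # pairing needs an even total
    degrees[0] += 1
edges = pair_stubs(degrees)
```

By construction, every node ends up in exactly as many edges (counting self-loops twice) as its assigned degree, which is the property the question is after.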
I am trying to generate a random graph that has small-world properties (exhibits a power law distribution). I just started using the networkx package and discovered that it offers a variety of random graph generation. Can someone tell me if it is possible to generate a graph where a given node's degree follows a gamma distribution (either in R or using Python's networkx package)?
0
1
9,242
0
4,345,485
0
0
0
0
1
true
4
2010-12-03T12:04:00.000
3
1
0
Python Imaging, how to quantize an image to 16bit depth?
4,345,337
1.2
python,python-imaging-library,imaging
You might want to look into converting your image to a numpy array, performing your quantisation, then converting back to PIL. There are modules in numpy to convert to/from PIL images.
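One concrete way to follow that advice: for 16-bit colour depth in the common 5-6-5 ("RGB565") layout, you can quantise by masking off the low-order bits of each channel in the NumPy array. The function name and bit layout below are my own choices, a sketch rather than the only way to do it:

```python
import numpy as np
from PIL import Image

def quantize_rgb565(img):
    """Reduce a 24-bit RGB image to 16-bit colour depth by keeping the
    top 5 bits of red, 6 of green, 5 of blue (the RGB565 layout),
    via the numpy round-trip suggested above."""
    a = np.asarray(img.convert("RGB"), dtype=np.uint8)
    # per-channel masks: 11111000, 11111100, 11111000
    a = a & np.array([0b11111000, 0b11111100, 0b11111000], dtype=np.uint8)
    return Image.fromarray(a)
```

This leaves the image in 24-bit storage but with only 2^16 distinct colours; packing into actual 16-bit words would be a further step if the file format requires it.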
I would like to quantize a 24bit image to 16bit color depth using Python Imaging. PIL used to provide a method im.quantize(colors, **options) however this has been deprecated for out = im.convert("P", palette=Image.ADAPTIVE, colors=256) Unfortunately 256 is the MAXIMUM number of colors that im.convert() will quantize to (8 bit only). How can I quantize a 24bit image down to 16bit using PIL (or similar)? thanks
0
1
4,447
0
4,348,902
0
0
0
0
1
false
144
2010-12-03T18:41:00.000
2
7
0
Saving interactive Matplotlib figures
4,348,733
0.057081
python,matplotlib
Good question. Here is the doc text from pylab.save: pylab no longer provides a save function, though the old pylab function is still available as matplotlib.mlab.save (you can still refer to it in pylab as "mlab.save"). However, for plain text files, we recommend numpy.savetxt. For saving numpy arrays, we recommend numpy.save, and its analog numpy.load, which are available in pylab as np.save and np.load.
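Separately from the data-saving note above: matplotlib versions from 1.2 onward (released after this exchange) made Figure objects picklable, which is the closest analogue to MATLAB's .fig round-trip. A hedged sketch, assuming a modern matplotlib:

```python
import io
import pickle

import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4], label="data")

# serialise the live figure object, .fig-style
buf = io.BytesIO()
pickle.dump(fig, buf)

# a colleague can reload it and keep working with the same object
buf.seek(0)
fig2 = pickle.load(buf)
```

The caveat is that pickles are version-sensitive: both sides should be running compatible matplotlib versions, so this suits colleagues on a shared environment more than long-term archival.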
Is there a way to save a Matplotlib figure such that it can be re-opened and have typical interaction restored? (Like the .fig format in MATLAB?) I find myself running the same scripts many times to generate these interactive figures. Or I'm sending my colleagues multiple static PNG files to show different aspects of a plot. I'd rather send the figure object and have them interact with it themselves.
0
1
98,342
0
4,368,488
0
1
0
0
1
false
4
2010-12-06T16:11:00.000
1
7
0
Symmetric dictionary where d[a][b] == d[b][a]
4,368,423
0.028564
python,inheritance,dictionary
An obvious alternative is to use a (v1,v2) tuple as the key into a single standard dict, and insert both (v1,v2) and (v2,v1) into the dictionary, making them refer to the same object on the right-hand side.
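A small sketch of that idea (class name mine): rather than storing both orderings as the answer suggests, you can normalise the key once on every access, which keeps a single entry per pair. Either variant satisfies s_d[a, b] == s_d[b, a].

```python
class SymmetricDict(dict):
    """dict keyed by unordered pairs: d[a, b] and d[b, a] hit the
    same entry, by sorting the pair into a canonical order."""

    @staticmethod
    def _key(pair):
        a, b = pair
        return (a, b) if a <= b else (b, a)

    def __setitem__(self, pair, value):
        super().__setitem__(self._key(pair), value)

    def __getitem__(self, pair):
        return super().__getitem__(self._key(pair))

    def __contains__(self, pair):
        return super().__contains__(self._key(pair))
```

Note this is the s_d[v1, v2] tuple style rather than the chained s_d[v1][v2] style the question sketches; supporting the chained style would mean returning a proxy object from the first lookup, which is more machinery for the same behaviour.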
I have an algorithm in python which creates measures for pairs of values, where m(v1, v2) == m(v2, v1) (i.e. it is symmetric). I had the idea to write a dictionary of dictionaries where these values are stored in a memory-efficient way, so that they can easily be retrieved with keys in any order. I like to inherit from things, and ideally, I'd love to write a symmetric_dict where s_d[v1][v2] always equals s_d[v2][v1], probably by checking which of the v's is larger according to some kind of ordering relation and then switching them around so that the smaller element is always mentioned first. i.e. when calling s_d[5][2] = 4, the dict of dicts will turn them around so that they are in fact stored as s_d[2][5] = 4, and the same for retrieval of the data. I'm also very open for a better data structure, but I'd prefer an implementation with "is-a" relationship to something which just uses a dict and preprocesses some function arguments.
0
1
1,589
0
4,854,162
0
0
0
0
2
true
15
2010-12-08T17:01:00.000
4
3
0
Does PyPy work with NLTK?
4,390,129
1.2
python,nltk,pypy
I got a response via email (Seo, please feel free to respond here) that said: The main issues are: PyPy implements Python 2.5. This means adding "from __future__ import with_statement" here and there, rewriting usages of property.setter, and fixing up new-in-2.6 library calls like os.walk. NLTK needs PyYAML. Simply symlinking (or copying) stuff to pypy-1.4/site-packages works. And: Do you have NLTK running with PyPy, and if so are you seeing performance improvements? Yes, and yes. So apparently NLTK does run with PyPy and there are performance improvements.
Does PyPy work with NLTK, and if so, is there an appreciable performance improvement, say for the bayesian classifier? While we're at it, do any of the other python environments (shedskin, etc) offer better nltk performance than cpython?
0
1
2,318
0
4,549,093
0
0
0
0
2
false
15
2010-12-08T17:01:00.000
5
3
0
Does PyPy work with NLTK?
4,390,129
0.321513
python,nltk,pypy
At least some of NLTK does work with PyPy and there is some performance gain, according to someone on #pypy on freenode. Have you run any tests? Just download PyPy from pypy.org/download.html and instead of "time python yourscript.py data.txt" type "time pypy yourscript.py data.txt".
Does PyPy work with NLTK, and if so, is there an appreciable performance improvement, say for the bayesian classifier? While we're at it, do any of the other python environments (shedskin, etc) offer better nltk performance than cpython?
0
1
2,318
0
4,450,277
0
1
0
0
1
false
14
2010-12-15T13:02:00.000
8
7
0
easy save/load of data in python
4,450,144
1
python,io
If it should be human-readable, I'd also go with JSON. Unless you need to exchange it with enterprise-type people, they like XML better. :-) If it should be human editable and isn't too complex, I'd probably go with some sort of INI-like format, like for example configparser. If it is complex, and doesn't need to be exchanged, I'd go with just pickling the data, unless it's very complex, in which case I'd use ZODB. If it's a LOT of data, and needs to be exchanged, I'd use SQL. That pretty much covers it, I think.
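For the questioner's actual case (two named float vectors, human-readable), the JSON route above is only a few lines. A minimal sketch, with function names of my own choosing:

```python
import json

def save_vectors(path, x, y):
    """Save two float vectors under readable names, as JSON."""
    with open(path, "w") as f:
        json.dump({"X": list(x), "Y": list(y)}, f, indent=2)

def load_vectors(path):
    """Load the vectors back; returns (X, Y) as lists of floats."""
    with open(path) as f:
        data = json.load(f)
    return data["X"], data["Y"]
```

The resulting file is plain text with the vectors labelled "X" and "Y", so it stays editable by hand and readable by practically any other language.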
What is the easiest way to save and load data in python, preferably in a human-readable output format? The data I am saving/loading consists of two vectors of floats. Ideally, these vectors would be named in the file (e.g. X and Y). My current save() and load() functions use file.readline(), file.write() and string-to-float conversion. There must be something better.
0
1
62,035
0
4,460,959
0
1
0
0
1
false
42
2010-12-16T12:49:00.000
0
10
0
Extract the first paragraph from a Wikipedia article (Python)
4,460,921
0
python,wikipedia
Try a combination of urllib to fetch the site and BeautifulSoup or lxml to parse the data.
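To illustrate the parsing half without any network access or third-party packages, here is a stdlib-only sketch that pulls the text of the first <p> element out of an HTML string (class name mine, and the sample HTML is made up). For real Wikipedia pages you would fetch the markup with urllib first, and BeautifulSoup or lxml, as suggested above, cope far better with messy real-world HTML.

```python
from html.parser import HTMLParser

class FirstParagraph(HTMLParser):
    """Collect the text content of the first <p> element."""

    def __init__(self):
        super().__init__()
        self.in_p = False    # currently inside the first <p>?
        self.done = False    # already captured a full paragraph?
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "p" and not self.done:
            self.in_p = True

    def handle_endtag(self, tag):
        if tag == "p" and self.in_p:
            self.in_p = False
            self.done = True

    def handle_data(self, data):
        if self.in_p:
            self.text.append(data)

parser = FirstParagraph()
parser.feed("<html><body><p>Albert <b>Einstein</b> was a physicist.</p>"
            "<p>More.</p></body></html>")
first_paragraph = "".join(parser.text)
```

Note that inline tags like <b> inside the paragraph are transparent here, because only the enclosing <p>/<\/p> pair toggles collection.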
How can I extract the first paragraph from a Wikipedia article, using Python? For example, for Albert Einstein, that would be: Albert Einstein (pronounced /ˈælbərt ˈaɪnstaɪn/; German: [ˈalbɐt ˈaɪnʃtaɪn] ( listen); 14 March 1879 – 18 April 1955) was a theoretical physicist, philosopher and author who is widely regarded as one of the most influential and iconic scientists and intellectuals of all time. A German-Swiss Nobel laureate, Einstein is often regarded as the father of modern physics.[2] He received the 1921 Nobel Prize in Physics "for his services to theoretical physics, and especially for his discovery of the law of the photoelectric effect".[3]
0
1
49,932
0
6,335,027
0
0
0
0
1
false
2
2010-12-20T10:32:00.000
0
2
0
Python usage of breadth-first search on social graph
4,488,783
0
python,algorithm,social-networking,traversal,breadth-first-search
I have around 300 friends on Facebook, and some of my friends also have around 300 friends on average. If you're going to build a graph out of that, it's going to be huge. Correct me if I am wrong. A BFS will be quite demanding in this scenario, won't it? Thanks, J
I've been reading a lot of stackoverflow questions about how to use breadth-first search, dfs, A*, etc. The question is what the optimal usage is and how to implement it on real versus simulated graphs. E.g. Consider you have a social graph of Twitter/Facebook/Some social networking site, to me it seems a search algorithm would work as follows: If user A had 10 friends, then one of those had 2 friends and another 3. The search would first figure out who user A's friends were, then it would have to look up who the friends were for each of the ten users. To me this seems like bfs? However, I'm not sure if that's the way to go about implementing the algorithm. Thanks,
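The level-by-level traversal the question describes is indeed BFS. A minimal sketch over a friend graph stored as an adjacency dict (the data layout and function name here are illustrative assumptions, not a prescribed API):

```python
from collections import deque

def bfs_friends(graph, start):
    """Breadth-first traversal: visit the start user, then all direct
    friends, then friends-of-friends, and so on. `graph` maps each
    user to an iterable of their friends."""
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        user = queue.popleft()
        order.append(user)
        for friend in graph.get(user, ()):
            if friend not in seen:   # each user is enqueued at most once
                seen.add(friend)
                queue.append(friend)
    return order
```

The `seen` set is what keeps this tractable on a social graph, where mutual friendships would otherwise make the traversal revisit the same people endlessly.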
0
1
1,271
0
4,516,073
0
0
0
0
3
false
7
2010-12-23T05:06:00.000
-1
4
0
CvSize does not exist?
4,516,007
-0.049958
python,opencv,computer-vision
Perhaps the documentation is wrong and you have to use cv.cvSize instead of cv.CvSize? Also, do a dir(cv) to find out the methods available to you.
I have installed the official Python bindings for OpenCV and I am implementing some standard textbook functions just to get used to the Python syntax. I have run into the problem, however, that CvSize does not actually exist, even though it is documented on the site... The simple function: blah = cv.CvSize(inp.width/2, inp.height/2) yields the error 'module' object has no attribute 'CvSize'. I have imported with 'import cv'. Is there an equivalent structure? Do I need something more? Thanks.
0
1
7,873
0
6,534,684
0
0
0
0
3
false
7
2010-12-23T05:06:00.000
8
4
0
CvSize does not exist?
4,516,007
1
python,opencv,computer-vision
It seems that they opted to eventually avoid this structure altogether. Instead, it just uses a python tuple (width, height).
I have installed the official Python bindings for OpenCV and I am implementing some standard textbook functions just to get used to the Python syntax. I have run into the problem, however, that CvSize does not actually exist, even though it is documented on the site... The simple function: blah = cv.CvSize(inp.width/2, inp.height/2) yields the error 'module' object has no attribute 'CvSize'. I have imported with 'import cv'. Is there an equivalent structure? Do I need something more? Thanks.
0
1
7,873
0
5,974,122
0
0
0
0
3
false
7
2010-12-23T05:06:00.000
0
4
0
CvSize does not exist?
4,516,007
0
python,opencv,computer-vision
The right call is cv.cvSize(inp.width/2, inp.height/2). All functions in the python opencv bindings start with a lowercased c even in the highgui module.
I have installed the official Python bindings for OpenCV and I am implementing some standard textbook functions just to get used to the Python syntax. I have run into the problem, however, that CvSize does not actually exist, even though it is documented on the site... The simple function: blah = cv.CvSize(inp.width/2, inp.height/2) yields the error 'module' object has no attribute 'CvSize'. I have imported with 'import cv'. Is there an equivalent structure? Do I need something more? Thanks.
0
1
7,873
0
4,523,953
0
0
0
0
1
true
1
2010-12-23T23:38:00.000
1
1
0
NumPy Under Xen Client System
4,523,267
1.2
python,ubuntu,numpy,virtualization,xen
Yes. The optimizations run in userland and so shouldn't cause any PV traps.
I am working on a project built on NumPy, and I would like to take advantage of some of NumPy's optional architecture-specific optimizations. If I install NumPy on a paravirtualized Xen client OS (Ubuntu, in this case - a Linode), can I take advantage of those optimizations?
0
1
99
0
4,535,370
0
0
0
0
1
true
4
2010-12-26T20:47:00.000
5
1
0
Colon difference in Matlab and Python
4,535,359
1.2
python,arrays,matlab,syntax
someArray[:,0,0] is the Python NumPy equivalent of MATLAB's someArray(:,1,1). I've never figured out how to do it in pure Python; the colon slice operation is a total mystery to me with lists-of-lists.
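The difference is easy to check directly. On a NumPy array, a[:, 0, 0] selects element (0, 0) from every 2-D slice along the first axis, while the chained form a[:][0][0] does something else entirely: a[:] is just the whole array, so a[:][0][0] reduces to a[0][0], the first row of the first slice.

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# MATLAB a(:,1,1)  ->  NumPy a[:, 0, 0]  (0-based, ':' spans axis 0)
matlab_style = a[:, 0, 0]

# a[:] is a view of the whole array, so chaining collapses to a[0][0]
chained = a[:][0][0]
```

This is why the question sees different values: the MATLAB-style slice and the chained Python indexing are asking for different things.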
What is the equivalent to someArray(:,1,1) in python from Matlab? In python someArray[:][0][0] produces a different value
0
1
793
0
11,489,099
0
0
0
0
1
false
3
2010-12-27T08:18:00.000
1
4
1
Iterative MapReduce
4,537,422
0.049958
python,streaming,hadoop,mapreduce,iteration
You needn't write another job. You can put the same job inside a loop (a while loop) and just keep changing its parameters: each time the mapper and reducer finish, the loop creates a new configuration whose input file is simply the output of the previous phase.
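The loop structure can be sketched in plain Python. This is a single-machine stand-in for re-submitting the streaming job each iteration (1-D points for brevity; the function name and convergence check are my own simplifications): the "map" step assigns points to their nearest centre, the "reduce" step averages each cluster, and the previous round's output centres become the next round's input.

```python
def kmeans_iterate(points, centres, rounds=10):
    """Iterative k-means driver loop: re-run the same map/reduce step,
    feeding each round's centres into the next, until they stop moving."""
    for _ in range(rounds):
        # "map": assign each point to the index of its nearest centre
        assignments = {}
        for p in points:
            c = min(range(len(centres)), key=lambda i: abs(p - centres[i]))
            assignments.setdefault(c, []).append(p)
        # "reduce": new centre = mean of its assigned points
        # (a centre with no points keeps its old position)
        new = []
        for i in range(len(centres)):
            ps = assignments.get(i, [centres[i]])
            new.append(sum(ps) / len(ps))
        if new == centres:   # converged: stop iterating early
            break
        centres = new
    return centres
```

On Hadoop the analogous driver would, each pass, point the job's input at the previous reduce output in HDFS and resubmit, exactly as the answer describes, with the convergence check reading the centres file between submissions.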
I've written a simple k-means clustering code for Hadoop (two separate programs - mapper and reducer). The code is working over a small dataset of 2d points on my local box. It's written in Python and I plan to use Streaming API. I would like suggestions on how best to run this program on Hadoop. After each run of mapper and reducer, new centres are generated. These centres are input for the next iteration. From what I can see, each mapreduce iteration will have to be a separate mapreduce job. And it looks like I'll have to write another script (python/bash) to extract the new centres from HDFS after each reduce phase, and feed it back to the mapper. Any other easier, less messy way? If the cluster happens to use a fair scheduler, will it take very long before this computation completes?
0
1
3,152