[{"Question":"A researcher has created a small simulation in MATLAB and we want to make it accessible to others. My plan is to take the simulation, clean up a few things and turn it into a set of functions. Then I plan to compile it into a C library and use SWIG to create a Python wrapper. At that point, I should be able to call the simulation from a small Django app. At least I hope so.\nDo I have the right plan? Has anyone else done something similar? Can you let me know if there are any serious pitfalls that I'm not aware of at the moment?","AnswerCount":4,"Available Count":3,"Score":0.2449186624,"is_accepted":false,"ViewCount":2313,"Q_Id":5136,"Users Score":5,"Answer":"One thing to remember is that the Matlab compiler does not actually compile the Matlab code into native machine instructions. It simply wraps it into a standalone executable or a library with its own runtime engine that runs it. You would be able to run your code without Matlab installed, and you would be able to interface it with other languages, but it will still be interpreted Matlab code, so there would be no speedup.","Q_Score":14,"Tags":"python,c,matlab","A_Id":1659332,"CreationDate":"2008-08-07T18:47:00.000","Title":"Does anyone have experience creating a shared library in MATLAB?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A researcher has created a small simulation in MATLAB and we want to make it accessible to others. My plan is to take the simulation, clean up a few things and turn it into a set of functions. Then I plan to compile it into a C library and use SWIG to create a Python wrapper. At that point, I should be able to call the simulation from a small Django app. At least I hope so.\nDo I have the right plan? Has anyone else done something similar? Can you let me know if there are any serious pitfalls that I'm not aware of at the moment?","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":2313,"Q_Id":5136,"Users Score":2,"Answer":"I'd also try ctypes first. \n\nUse the Matlab compiler to compile the code into C. \nCompile the C code into a DLL.\nUse ctypes to load and call code from this DLL\n\nThe hardest step is probably 1, but if you already know Matlab and have used the Matlab compiler, you should not have serious problems with it.","Q_Score":14,"Tags":"python,c,matlab","A_Id":138534,"CreationDate":"2008-08-07T18:47:00.000","Title":"Does anyone have experience creating a shared library in MATLAB?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A researcher has created a small simulation in MATLAB and we want to make it accessible to others. My plan is to take the simulation, clean up a few things and turn it into a set of functions. Then I plan to compile it into a C library and use SWIG to create a Python wrapper. At that point, I should be able to call the simulation from a small Django app. At least I hope so.\nDo I have the right plan? Has anyone else done something similar? 
Can you let me know if there are any serious pitfalls that I'm not aware of at the moment?","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":2313,"Q_Id":5136,"Users Score":3,"Answer":"I won't help much but I remember that I was able to wrap a MATLAB simulation into DLL and then call it from a Delphi app. It worked really well.","Q_Score":14,"Tags":"python,c,matlab","A_Id":5302,"CreationDate":"2008-08-07T18:47:00.000","Title":"Does anyone have experience creating a shared library in MATLAB?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using the zip function, Python allows for loops to traverse multiple sequences in parallel. \nfor (x,y) in zip(List1, List2):\nDoes MATLAB have an equivalent syntax? If not, what is the best way to iterate over two parallel arrays at the same time using MATLAB?","AnswerCount":7,"Available Count":1,"Score":-0.057080742,"is_accepted":false,"ViewCount":7417,"Q_Id":49307,"Users Score":-2,"Answer":"for loops in MATLAB used to be slow, but this is not true anymore.\nSo vectorizing is not always the miracle solution. Just use the profiler, and tic and toc functions to help you identify possible bottlenecks.","Q_Score":14,"Tags":"python,arrays,matlab,for-loop","A_Id":138886,"CreationDate":"2008-09-08T08:25:00.000","Title":"Can parallel traversals be done in MATLAB just as in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I understand that these methods are for pickling\/unpickling and have no relation to the reduce built-in function, but what's the difference between the 2 and why do we need both?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":9908,"Q_Id":150284,"Users Score":9,"Answer":"__reduce_ex__ is what __reduce__ should have been but never became. __reduce_ex__ works like __reduce__ but the pickle protocol is passed.","Q_Score":19,"Tags":"python,pickle","A_Id":150318,"CreationDate":"2008-09-29T19:31:00.000","Title":"What is the difference between __reduce__ and __reduce_ex__?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a simulator in Python, and am curious about options and opinions regarding basic 2D animations. By animation, I'm referring to rendering on the fly, not displaying prerendered images.\nI'm currently using matplotlib (Wxagg backend), and it's possible that I'll be able to continue using it, but I suspect it won't be able to sufficiently scale in terms of performance or capabilities.\nRequirements are:\n\nCross-platform (Linux, MacOS X,\nWindows)\nLow complexity overhead\nPlays well with wxpython (at least won't step on each other's toes unduly)\nInteractivity. Detect when objects are clicked on, moused over, etc. 
\n\nNote that high performance isn't on the list, but the ability to handle ~100 bitmap objects on the screen would be good.\nYour thoughts?","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":29957,"Q_Id":169810,"Users Score":3,"Answer":"You can try pygame, its very easy to handle and similar to SDL under c++","Q_Score":12,"Tags":"python,animation,2d","A_Id":1568711,"CreationDate":"2008-10-04T05:36:00.000","Title":"2D animation in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been recently asked to learn some MATLAB basics for a class.\nWhat does make it so cool for researchers and people that works in university?\nI saw it's cool to work with matrices and plotting things... (things that can be done easily in Python using some libraries).\nWriting a function or parsing a file is just painful. I'm still at the start, what am I missing?\nIn the \"real\" world, what should I think to use it for? When should it can do better than Python? For better I mean: easy way to write something performing.\n\nUPDATE 1: One of the things I'd like to know the most is \"Am I missing something?\" :D\nUPDATE 2: Thank you for your answers. My question is not about buy or not to buy MATLAB. The university has the possibility to give me a copy of an old version of MATLAB (MATLAB 5 I guess) for free, without breaking the license. I'm interested in its capabilities and if it deserves a deeper study (I won't need anything more than basic MATLAB in oder to pass the exam :P ) it will really be better than Python for a specific kind of task in the real world.","AnswerCount":21,"Available Count":13,"Score":0.047583087,"is_accepted":false,"ViewCount":200558,"Q_Id":179904,"Users Score":5,"Answer":"Seems to be pure inertia. Where it is in use, everyone is too busy to learn IDL or numpy in sufficient detail to switch, and don't want to rewrite good working programs. Luckily that's not strictly true, but true enough in enough places that Matlab will be around a long time. Like Fortran (in active use where i work!)","Q_Score":53,"Tags":"python,matlab","A_Id":181127,"CreationDate":"2008-10-07T19:11:00.000","Title":"What is MATLAB good for? Why is it so used by universities? When is it better than Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been recently asked to learn some MATLAB basics for a class.\nWhat does make it so cool for researchers and people that works in university?\nI saw it's cool to work with matrices and plotting things... (things that can be done easily in Python using some libraries).\nWriting a function or parsing a file is just painful. I'm still at the start, what am I missing?\nIn the \"real\" world, what should I think to use it for? When should it can do better than Python? For better I mean: easy way to write something performing.\n\nUPDATE 1: One of the things I'd like to know the most is \"Am I missing something?\" :D\nUPDATE 2: Thank you for your answers. My question is not about buy or not to buy MATLAB. The university has the possibility to give me a copy of an old version of MATLAB (MATLAB 5 I guess) for free, without breaking the license. 
I'm interested in its capabilities and if it deserves a deeper study (I won't need anything more than basic MATLAB in oder to pass the exam :P ) it will really be better than Python for a specific kind of task in the real world.","AnswerCount":21,"Available Count":13,"Score":0.0190453158,"is_accepted":false,"ViewCount":200558,"Q_Id":179904,"Users Score":2,"Answer":"Matlab is good at doing number crunching. Also Matrix and matrix manipulation. It has many helpful built in libraries(depends on the what version) I think it is easier to use than python if you are going to be calculating equations.","Q_Score":53,"Tags":"python,matlab","A_Id":1890839,"CreationDate":"2008-10-07T19:11:00.000","Title":"What is MATLAB good for? Why is it so used by universities? When is it better than Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been recently asked to learn some MATLAB basics for a class.\nWhat does make it so cool for researchers and people that works in university?\nI saw it's cool to work with matrices and plotting things... (things that can be done easily in Python using some libraries).\nWriting a function or parsing a file is just painful. I'm still at the start, what am I missing?\nIn the \"real\" world, what should I think to use it for? When should it can do better than Python? For better I mean: easy way to write something performing.\n\nUPDATE 1: One of the things I'd like to know the most is \"Am I missing something?\" :D\nUPDATE 2: Thank you for your answers. My question is not about buy or not to buy MATLAB. The university has the possibility to give me a copy of an old version of MATLAB (MATLAB 5 I guess) for free, without breaking the license. I'm interested in its capabilities and if it deserves a deeper study (I won't need anything more than basic MATLAB in oder to pass the exam :P ) it will really be better than Python for a specific kind of task in the real world.","AnswerCount":21,"Available Count":13,"Score":1.0,"is_accepted":false,"ViewCount":200558,"Q_Id":179904,"Users Score":13,"Answer":"MATLAB is great for doing array manipulation, doing specialized math functions, and for creating nice plots quick.\nI'd probably only use it for large programs if I could use a lot of array\/matrix manipulation.\nYou don't have to worry about the IDE as much as in more formal packages, so it's easier for students without a lot of programming experience to pick up.","Q_Score":53,"Tags":"python,matlab","A_Id":179910,"CreationDate":"2008-10-07T19:11:00.000","Title":"What is MATLAB good for? Why is it so used by universities? When is it better than Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been recently asked to learn some MATLAB basics for a class.\nWhat does make it so cool for researchers and people that works in university?\nI saw it's cool to work with matrices and plotting things... (things that can be done easily in Python using some libraries).\nWriting a function or parsing a file is just painful. I'm still at the start, what am I missing?\nIn the \"real\" world, what should I think to use it for? When should it can do better than Python? 
For better I mean: easy way to write something performing.\n\nUPDATE 1: One of the things I'd like to know the most is \"Am I missing something?\" :D\nUPDATE 2: Thank you for your answers. My question is not about buy or not to buy MATLAB. The university has the possibility to give me a copy of an old version of MATLAB (MATLAB 5 I guess) for free, without breaking the license. I'm interested in its capabilities and if it deserves a deeper study (I won't need anything more than basic MATLAB in oder to pass the exam :P ) it will really be better than Python for a specific kind of task in the real world.","AnswerCount":21,"Available Count":13,"Score":0.0380768203,"is_accepted":false,"ViewCount":200558,"Q_Id":179904,"Users Score":4,"Answer":"The main reason it is useful in industry is the plug-ins built on top of the core functionality. Almost all active Matlab development for the last few years has focused on these. \nUnfortunately, you won't have much opportunity to use these in an academic environment.","Q_Score":53,"Tags":"python,matlab","A_Id":179932,"CreationDate":"2008-10-07T19:11:00.000","Title":"What is MATLAB good for? Why is it so used by universities? When is it better than Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been recently asked to learn some MATLAB basics for a class.\nWhat does make it so cool for researchers and people that works in university?\nI saw it's cool to work with matrices and plotting things... (things that can be done easily in Python using some libraries).\nWriting a function or parsing a file is just painful. I'm still at the start, what am I missing?\nIn the \"real\" world, what should I think to use it for? When should it can do better than Python? For better I mean: easy way to write something performing.\n\nUPDATE 1: One of the things I'd like to know the most is \"Am I missing something?\" :D\nUPDATE 2: Thank you for your answers. My question is not about buy or not to buy MATLAB. The university has the possibility to give me a copy of an old version of MATLAB (MATLAB 5 I guess) for free, without breaking the license. I'm interested in its capabilities and if it deserves a deeper study (I won't need anything more than basic MATLAB in oder to pass the exam :P ) it will really be better than Python for a specific kind of task in the real world.","AnswerCount":21,"Available Count":13,"Score":0.0380768203,"is_accepted":false,"ViewCount":200558,"Q_Id":179904,"Users Score":4,"Answer":"One reason MATLAB is popular with universities is the same reason a lot of things are popular with universities: there's a lot of professors familiar with it, and it's fairly robust.\nI've spoken to a lot of folks who are especially interested in MATLAB's nascent ability to tap into the GPU instead of working serially. Having used Python in grad school, I kind of wish I had the licks to work with MATLAB in that case. It sure would make vector space calculations a breeze.","Q_Score":53,"Tags":"python,matlab","A_Id":180012,"CreationDate":"2008-10-07T19:11:00.000","Title":"What is MATLAB good for? Why is it so used by universities? 
When is it better than Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been recently asked to learn some MATLAB basics for a class.\nWhat does make it so cool for researchers and people that works in university?\nI saw it's cool to work with matrices and plotting things... (things that can be done easily in Python using some libraries).\nWriting a function or parsing a file is just painful. I'm still at the start, what am I missing?\nIn the \"real\" world, what should I think to use it for? When should it can do better than Python? For better I mean: easy way to write something performing.\n\nUPDATE 1: One of the things I'd like to know the most is \"Am I missing something?\" :D\nUPDATE 2: Thank you for your answers. My question is not about buy or not to buy MATLAB. The university has the possibility to give me a copy of an old version of MATLAB (MATLAB 5 I guess) for free, without breaking the license. I'm interested in its capabilities and if it deserves a deeper study (I won't need anything more than basic MATLAB in oder to pass the exam :P ) it will really be better than Python for a specific kind of task in the real world.","AnswerCount":21,"Available Count":13,"Score":0.0285636566,"is_accepted":false,"ViewCount":200558,"Q_Id":179904,"Users Score":3,"Answer":"I think you answered your own question when you noted that Matlab is \"cool to work with matrixes and plotting things\". Any application that requires a lot of matrix maths and visualisation will probably be easiest to do in Matlab.\nThat said, Matlab's syntax feels awkward and shows the language's age. In contrast, Python is a much nicer general purpose programming language and, with the right libraries can do much of what Matlab does. However, Matlab is always going to have a more concise syntax than Python for vector and matrix manipulation. \nIf much of your programming involves these sorts of manipulations, such as in signal processing and some statistical techniques, then Matlab will be a better choice.","Q_Score":53,"Tags":"python,matlab","A_Id":181295,"CreationDate":"2008-10-07T19:11:00.000","Title":"What is MATLAB good for? Why is it so used by universities? When is it better than Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been recently asked to learn some MATLAB basics for a class.\nWhat does make it so cool for researchers and people that works in university?\nI saw it's cool to work with matrices and plotting things... (things that can be done easily in Python using some libraries).\nWriting a function or parsing a file is just painful. I'm still at the start, what am I missing?\nIn the \"real\" world, what should I think to use it for? When should it can do better than Python? For better I mean: easy way to write something performing.\n\nUPDATE 1: One of the things I'd like to know the most is \"Am I missing something?\" :D\nUPDATE 2: Thank you for your answers. My question is not about buy or not to buy MATLAB. The university has the possibility to give me a copy of an old version of MATLAB (MATLAB 5 I guess) for free, without breaking the license. 
I'm interested in its capabilities and if it deserves a deeper study (I won't need anything more than basic MATLAB in oder to pass the exam :P ) it will really be better than Python for a specific kind of task in the real world.","AnswerCount":21,"Available Count":13,"Score":1.0,"is_accepted":false,"ViewCount":200558,"Q_Id":179904,"Users Score":15,"Answer":"Hold everything. When's the last time you programed your calculator to play tetris? Did you actually think you could write anything you want in those 128k of RAM? Likely not. MATLAB is not for programming unless you're dealing with huge matrices. It's the graphing calculator you whip out when you've got Megabytes to Gigabytes of data to crunch and\/or plot. Learn just basic stuff, but also don't kill yourself trying to make Python be a graphing calculator.\nYou'll quickly get a feel for when you want to crunch, plot or explore in MATLAB and when you want to have all that Python offers. Lots of engineers turn to pre and post processing in Python or Perl. Occasionally even just calling out to MATLAB for the hard bits.\nThey are such completely different tools that you should learn their basic strengths first without trying to replace one with the other. Granted for saving money I'd either use Octave or skimp on ease and learn to work with sparse matrices in Perl or Python.","Q_Score":53,"Tags":"python,matlab","A_Id":181492,"CreationDate":"2008-10-07T19:11:00.000","Title":"What is MATLAB good for? Why is it so used by universities? When is it better than Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been recently asked to learn some MATLAB basics for a class.\nWhat does make it so cool for researchers and people that works in university?\nI saw it's cool to work with matrices and plotting things... (things that can be done easily in Python using some libraries).\nWriting a function or parsing a file is just painful. I'm still at the start, what am I missing?\nIn the \"real\" world, what should I think to use it for? When should it can do better than Python? For better I mean: easy way to write something performing.\n\nUPDATE 1: One of the things I'd like to know the most is \"Am I missing something?\" :D\nUPDATE 2: Thank you for your answers. My question is not about buy or not to buy MATLAB. The university has the possibility to give me a copy of an old version of MATLAB (MATLAB 5 I guess) for free, without breaking the license. I'm interested in its capabilities and if it deserves a deeper study (I won't need anything more than basic MATLAB in oder to pass the exam :P ) it will really be better than Python for a specific kind of task in the real world.","AnswerCount":21,"Available Count":13,"Score":1.0,"is_accepted":false,"ViewCount":200558,"Q_Id":179904,"Users Score":34,"Answer":"I've been using matlab for many years in my research. It's great for linear algebra and has a large set of well-written toolboxes. The most recent versions are starting to push it into being closer to a general-purpose language (better optimizers, a much better object model, richer scoping rules, etc.). \nThis past summer, I had a job where I used Python + numpy instead of Matlab. I enjoyed the change of pace. It's a \"real\" language (and all that entails), and it has some great numeric features like broadcasting arrays. I also really like the ipython environment. 
\nHere are some things that I prefer about Matlab:\n\nconsistency: MathWorks has spent a lot of effort making the toolboxes look and work like each other. They haven't done a perfect job, but it's one of the best I've seen for a codebase that's decades old.\ndocumentation: I find it very frustrating to figure out some things in numpy and\/or python because the documentation quality is spotty: some things are documented very well, some not at all. It's often most frustrating when I see things that appear to mimic Matlab, but don't quite work the same. Being able to grab the source is invaluable (to be fair, most of the Matlab toolboxes ship with source too)\ncompactness: for what I do, Matlab's syntax is often more compact (but not always)\nmomentum: I have too much Matlab code to change now\n\nIf I didn't have such a large existing codebase, I'd seriously consider switching to Python + numpy.","Q_Score":53,"Tags":"python,matlab","A_Id":193386,"CreationDate":"2008-10-07T19:11:00.000","Title":"What is MATLAB good for? Why is it so used by universities? When is it better than Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been recently asked to learn some MATLAB basics for a class.\nWhat does make it so cool for researchers and people that works in university?\nI saw it's cool to work with matrices and plotting things... (things that can be done easily in Python using some libraries).\nWriting a function or parsing a file is just painful. I'm still at the start, what am I missing?\nIn the \"real\" world, what should I think to use it for? When should it can do better than Python? For better I mean: easy way to write something performing.\n\nUPDATE 1: One of the things I'd like to know the most is \"Am I missing something?\" :D\nUPDATE 2: Thank you for your answers. My question is not about buy or not to buy MATLAB. The university has the possibility to give me a copy of an old version of MATLAB (MATLAB 5 I guess) for free, without breaking the license. I'm interested in its capabilities and if it deserves a deeper study (I won't need anything more than basic MATLAB in oder to pass the exam :P ) it will really be better than Python for a specific kind of task in the real world.","AnswerCount":21,"Available Count":13,"Score":0.0285636566,"is_accepted":false,"ViewCount":200558,"Q_Id":179904,"Users Score":3,"Answer":"It's been some time since I've used Matlab, but from memory it does provide (albeit with extra plugins) the ability to generate source to allow you to realise your algorithm on a DSP.\nSince python is a general purpose programming language there is no reason why you couldn't do everything in python that you can do in matlab. However, matlab does provide a number of other tools - eg. a very broad array of dsp features, a broad array of S and Z domain features.\nAll of these could be hand coded in python (since it's a general purpose language), but if all you're after is the results perhaps spending the money on Matlab is the cheaper option?\nThese features have also been tuned for performance. eg. The documentation for Numpy specifies that their Fourier transform is optimised for power of 2 point data sets. 
As I understand Matlab has been written to use the most efficient Fourier transform to suit the size of the data set, not just power of 2.\nedit: Oh, and in Matlab you can produce some sensational looking plots very easily, which is important when you're presenting your data. Again, certainly not impossible using other tools.","Q_Score":53,"Tags":"python,matlab","A_Id":180736,"CreationDate":"2008-10-07T19:11:00.000","Title":"What is MATLAB good for? Why is it so used by universities? When is it better than Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been recently asked to learn some MATLAB basics for a class.\nWhat does make it so cool for researchers and people that works in university?\nI saw it's cool to work with matrices and plotting things... (things that can be done easily in Python using some libraries).\nWriting a function or parsing a file is just painful. I'm still at the start, what am I missing?\nIn the \"real\" world, what should I think to use it for? When should it can do better than Python? For better I mean: easy way to write something performing.\n\nUPDATE 1: One of the things I'd like to know the most is \"Am I missing something?\" :D\nUPDATE 2: Thank you for your answers. My question is not about buy or not to buy MATLAB. The university has the possibility to give me a copy of an old version of MATLAB (MATLAB 5 I guess) for free, without breaking the license. I'm interested in its capabilities and if it deserves a deeper study (I won't need anything more than basic MATLAB in oder to pass the exam :P ) it will really be better than Python for a specific kind of task in the real world.","AnswerCount":21,"Available Count":13,"Score":1.0,"is_accepted":false,"ViewCount":200558,"Q_Id":179904,"Users Score":7,"Answer":"Personally, I tend to think of Matlab as an interactive matrix calculator and plotting tool with a few scripting capabilities, rather than as a full-fledged programming language like Python or C. The reason for its success is that matrix stuff and plotting work out of the box, and you can do a few very specific things in it with virtually no actual programming knowledge. The language is, as you point out, extremely frustrating to use for more general-purpose tasks, such as even the simplest string processing. Its syntax is quirky, and it wasn't created with the abstractions necessary for projects of more than 100 lines or so in mind.\nI think the reason why people try to use Matlab as a serious programming language is that most engineers (there are exceptions; my degree is in biomedical engineering and I like programming) are horrible programmers and hate to program. They're taught Matlab in college mostly for the matrix math, and they learn some rudimentary programming as part of learning Matlab, and just assume that Matlab is good enough. I can't think of anyone I know who knows any language besides Matlab, but still uses Matlab for anything other than a few pure number crunching applications.","Q_Score":53,"Tags":"python,matlab","A_Id":181274,"CreationDate":"2008-10-07T19:11:00.000","Title":"What is MATLAB good for? Why is it so used by universities? 
When is it better than Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been recently asked to learn some MATLAB basics for a class.\nWhat does make it so cool for researchers and people that works in university?\nI saw it's cool to work with matrices and plotting things... (things that can be done easily in Python using some libraries).\nWriting a function or parsing a file is just painful. I'm still at the start, what am I missing?\nIn the \"real\" world, what should I think to use it for? When should it can do better than Python? For better I mean: easy way to write something performing.\n\nUPDATE 1: One of the things I'd like to know the most is \"Am I missing something?\" :D\nUPDATE 2: Thank you for your answers. My question is not about buy or not to buy MATLAB. The university has the possibility to give me a copy of an old version of MATLAB (MATLAB 5 I guess) for free, without breaking the license. I'm interested in its capabilities and if it deserves a deeper study (I won't need anything more than basic MATLAB in oder to pass the exam :P ) it will really be better than Python for a specific kind of task in the real world.","AnswerCount":21,"Available Count":13,"Score":1.0,"is_accepted":false,"ViewCount":200558,"Q_Id":179904,"Users Score":6,"Answer":"I believe you have a very good point and it's one that has been raised in the company where I work. The company is limited in it's ability to apply matlab because of the licensing costs involved. One developer proved that Python was a very suitable replacement but it fell on ignorant ears because to the owners of those ears...\n\nNo-one in the company knew Python although many of us wanted to use it. \nMatLab has a name, a company, and task force behind it to solve any problems.\nThere were some (but not a lot) of legacy MatLab projects that would need to be re-written.\n\nIf it's worth \u00a310,000 (??) it's gotta be worth it!! \nI'm with you here. Python is a very good replacement for MatLab.\nI should point out that I've been told the company uses maybe 5% to 10% of MatLabs capabilities and that is the basis for my agreement with the original poster","Q_Score":53,"Tags":"python,matlab","A_Id":180017,"CreationDate":"2008-10-07T19:11:00.000","Title":"What is MATLAB good for? Why is it so used by universities? When is it better than Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been recently asked to learn some MATLAB basics for a class.\nWhat does make it so cool for researchers and people that works in university?\nI saw it's cool to work with matrices and plotting things... (things that can be done easily in Python using some libraries).\nWriting a function or parsing a file is just painful. I'm still at the start, what am I missing?\nIn the \"real\" world, what should I think to use it for? When should it can do better than Python? For better I mean: easy way to write something performing.\n\nUPDATE 1: One of the things I'd like to know the most is \"Am I missing something?\" :D\nUPDATE 2: Thank you for your answers. My question is not about buy or not to buy MATLAB. 
The university has the possibility to give me a copy of an old version of MATLAB (MATLAB 5 I guess) for free, without breaking the license. I'm interested in its capabilities and if it deserves a deeper study (I won't need anything more than basic MATLAB in oder to pass the exam :P ) it will really be better than Python for a specific kind of task in the real world.","AnswerCount":21,"Available Count":13,"Score":1.0,"is_accepted":false,"ViewCount":200558,"Q_Id":179904,"Users Score":6,"Answer":"The most likely reason that it's used so much in universities is that the mathematics faculty are used to it, understand it, and know how to incorporate it into their curriculum.","Q_Score":53,"Tags":"python,matlab","A_Id":179912,"CreationDate":"2008-10-07T19:11:00.000","Title":"What is MATLAB good for? Why is it so used by universities? When is it better than Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been recently asked to learn some MATLAB basics for a class.\nWhat does make it so cool for researchers and people that works in university?\nI saw it's cool to work with matrices and plotting things... (things that can be done easily in Python using some libraries).\nWriting a function or parsing a file is just painful. I'm still at the start, what am I missing?\nIn the \"real\" world, what should I think to use it for? When should it can do better than Python? For better I mean: easy way to write something performing.\n\nUPDATE 1: One of the things I'd like to know the most is \"Am I missing something?\" :D\nUPDATE 2: Thank you for your answers. My question is not about buy or not to buy MATLAB. The university has the possibility to give me a copy of an old version of MATLAB (MATLAB 5 I guess) for free, without breaking the license. I'm interested in its capabilities and if it deserves a deeper study (I won't need anything more than basic MATLAB in oder to pass the exam :P ) it will really be better than Python for a specific kind of task in the real world.","AnswerCount":21,"Available Count":13,"Score":0.0380768203,"is_accepted":false,"ViewCount":200558,"Q_Id":179904,"Users Score":4,"Answer":"I know this question is old, and therefore may no longer be\nwatched, but I felt it was necessary to comment. As an\naerospace engineer at Georgia Tech, I can say, with no\nqualms, that MATLAB is awesome. You can have it quickly\ninterface with your Excel spreadsheets to pull in data about\nhow high and fast rockets are flying, how the wind affects\nthose same rockets, and how different engines matter. Beyond\nrocketry, similar concepts come into play for cars, trucks,\naircraft, spacecraft, and even athletics. You can pull in\nlarge amounts of data, manipulate all of it, and make sure\nyour results are as they should be. In the event something is\noff, you can add a line break where an error occurs to debug\nyour program without having to recompile every time you want\nto run your program. Is it slower than some other programs?\nWell, technically. I'm sure if you want to do the number\ncrunching it's great for on an NVIDIA graphics processor, it\nwould probably be faster, but it requires a lot more effort\nwith harder debugging.\nAs a general programming language, MATLAB is weak. It's not\nmeant to work against Python, Java, ActionScript, C\/C++ or\nany other general purpose language. 
It's meant for the\nengineering and mathematics niche the name implies, and it\ndoes so fantastically.","Q_Score":53,"Tags":"python,matlab","A_Id":1113065,"CreationDate":"2008-10-07T19:11:00.000","Title":"What is MATLAB good for? Why is it so used by universities? When is it better than Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I get the inverse of a matrix in python? I've implemented it myself, but it's pure python, and I suspect there are faster modules out there to do it.","AnswerCount":7,"Available Count":1,"Score":0.0285636566,"is_accepted":false,"ViewCount":125109,"Q_Id":211160,"Users Score":1,"Answer":"If you hate numpy, get out RPy and your local copy of R, and use it instead.\n(I would also echo to make you you really need to invert the matrix. In R, for example, linalg.solve and the solve() function don't actually do a full inversion, since it is unnecessary.)","Q_Score":62,"Tags":"python,algorithm,matrix,linear-algebra,matrix-inverse","A_Id":213717,"CreationDate":"2008-10-17T05:30:00.000","Title":"Python Inverse of a Matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to figure out the following problem.\nI am building Yet another math function grapher, The function is drawn on its predefined x,y range, that's all good.\nNow I am working on the background and the ticking of X, Y axes (if any axes are shown).\nI worked out the following.\nI have a fixed width of 250 p\nThe tick gap should be between 12.5 and 50p.\nThe ticks should indicate either unit or half unit range, by that i mean the following.\nx range (-5, 5): one tick = 1\nx range (-1, 1): one tick = 0.5 or 0.1 depending on the gap that each of this option would generate. \nx range (0.1, 0.3): 0.05 \nGiven a Xrange\nHow would you get the number of ticks between either full or half unit range ?\nOr maybe there are other way to approach this type of problems.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":239,"Q_Id":346823,"Users Score":0,"Answer":"Using deltaX\nif deltax between 2 and 10 half increment\nif deltax between 10 and 20 unit increment\nif smaller than 2 we multiply by 10 and test again\nif larger than 20 we divide \nThen we get the position of the first unit or half increment on the width using xmin.\nI still need to test this solution.","Q_Score":2,"Tags":"python,algorithm,math","A_Id":346873,"CreationDate":"2008-12-06T21:40:00.000","Title":"Ticking function grapher","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Recently I needed to do weighted random selection of elements from a list, both with and without replacement. While there are well known and good algorithms for unweighted selection, and some for weighted selection without replacement (such as modifications of the resevoir algorithm), I couldn't find any good algorithms for weighted selection with replacement. 
I also wanted to avoid the resevoir method, as I was selecting a significant fraction of the list, which is small enough to hold in memory.\nDoes anyone have any suggestions on the best approach in this situation? I have my own solutions, but I'm hoping to find something more efficient, simpler, or both.","AnswerCount":9,"Available Count":2,"Score":0.0886555158,"is_accepted":false,"ViewCount":32149,"Q_Id":352670,"Users Score":4,"Answer":"The following is a description of random weighted selection of an element of a \nset (or multiset, if repeats are allowed), both with and without replacement in O(n) space \nand O(log n) time.\nIt consists of implementing a binary search tree, sorted by the elements to be \nselected, where each node of the tree contains:\n\nthe element itself (element)\nthe un-normalized weight of the element (elementweight), and\nthe sum of all the un-normalized weights of the left-child node and all of \nits children (leftbranchweight).\nthe sum of all the un-normalized weights of the right-child node and all of\nits chilren (rightbranchweight).\n\nThen we randomly select an element from the BST by descending down the tree. A \nrough description of the algorithm follows. The algorithm is given a node of\nthe tree. Then the values of leftbranchweight, rightbranchweight, \nand elementweight of node is summed, and the weights are divided by this \nsum, resulting in the values leftbranchprobability, \nrightbranchprobability, and elementprobability, respectively. Then a \nrandom number between 0 and 1 (randomnumber) is obtained.\n\nif the number is less than elementprobability,\n\n\nremove the element from the BST as normal, updating leftbranchweight\nand rightbranchweight of all the necessary nodes, and return the \nelement.\n\nelse if the number is less than (elementprobability + leftbranchweight)\n\n\nrecurse on leftchild (run the algorithm using leftchild as node)\n\nelse \n\n\nrecurse on rightchild\n\n\nWhen we finally find, using these weights, which element is to be returned, we either simply return it (with replacement) or we remove it and update relevant weights in the tree (without replacement).\nDISCLAIMER: The algorithm is rough, and a treatise on the proper implementation \nof a BST is not attempted here; rather, it is hoped that this answer will help \nthose who really need fast weighted selection without replacement (like I do).","Q_Score":52,"Tags":"python,algorithm,random,random-sample","A_Id":9827070,"CreationDate":"2008-12-09T13:15:00.000","Title":"Weighted random selection with and without replacement","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Recently I needed to do weighted random selection of elements from a list, both with and without replacement. While there are well known and good algorithms for unweighted selection, and some for weighted selection without replacement (such as modifications of the resevoir algorithm), I couldn't find any good algorithms for weighted selection with replacement. I also wanted to avoid the resevoir method, as I was selecting a significant fraction of the list, which is small enough to hold in memory.\nDoes anyone have any suggestions on the best approach in this situation? 
I have my own solutions, but I'm hoping to find something more efficient, simpler, or both.","AnswerCount":9,"Available Count":2,"Score":0.022218565,"is_accepted":false,"ViewCount":32149,"Q_Id":352670,"Users Score":1,"Answer":"This is an old question for which numpy now offers an easy solution so I thought I would mention it. Current version of numpy is version 1.2 and numpy.random.choice allows the sampling to be done with or without replacement and with given weights.","Q_Score":52,"Tags":"python,algorithm,random,random-sample","A_Id":66553611,"CreationDate":"2008-12-09T13:15:00.000","Title":"Weighted random selection with and without replacement","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As per the title. I am trying to create a simple scater plot, but haven't found any Python 3.0 libraries that can do it. Note, this isn't for a website, so the web ones are a bit useless.","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":1540,"Q_Id":418835,"Users Score":1,"Answer":"Maybe you can use Python Imaging Library (PIL).\nAlso have a look at PyX, but this library is meant to output to PDF, ...","Q_Score":3,"Tags":"python,python-3.x,plot,graphing,scatter-plot","A_Id":421947,"CreationDate":"2009-01-07T01:12:00.000","Title":"Are there any graph\/plotting\/anything-like-that libraries for Python 3.0?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What's the easiest way to shuffle an array with python?","AnswerCount":11,"Available Count":1,"Score":0.072599319,"is_accepted":false,"ViewCount":285289,"Q_Id":473973,"Users Score":4,"Answer":"In addition to the previous replies, I would like to introduce another function.\nnumpy.random.shuffle as well as random.shuffle perform in-place shuffling. However, if you want to return a shuffled array numpy.random.permutation is the function to use.","Q_Score":325,"Tags":"python,arrays,random,shuffle","A_Id":40674024,"CreationDate":"2009-01-23T18:34:00.000","Title":"Shuffle an array with python, randomize array item order with python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"edit: Initially I was trying to be general but it came out vague. I've included more detail below. \nI'm writing a script that pulls in data from two large CSV files, one of people's schedules and the other of information about their schedules. The data is mined and combined to eventually create pajek format graphs for Monday-Sat of peoples connections, with a seventh graph representing all connections over the week with a string of 1's and 0's to indicate which days of the week the connections are made. This last graph is a break from the pajek format and is used by a seperate program written by another researcher.\nPajek format has a large header, and then lists connections as (vertex1 vertex2) unordered pairs. 
It's difficult to store these pairs in a dictionary, because there are often multiple connections on the same day between two pairs.\nI'm wondering what the best way to output to these graphs are. Should I make the large single graph and have a second script deconstruct it into several smaller graphs? Should I keep seven streams open and as I determine a connection write to them, or should I keep some other data structure for each and output them when I can (like a queue)?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":593,"Q_Id":555146,"Users Score":2,"Answer":"I would open seven file streams as accumulating them might be quite memory extensive if it's a lot of data. Of course that is only an option if you can sort them live and don't first need all data read to do the sorting.","Q_Score":0,"Tags":"python,file-io","A_Id":555159,"CreationDate":"2009-02-17T00:29:00.000","Title":"Multiple output files","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to load (de-serialize) a pre-computed list of integers from a file in a Python script (into a Python list). The list is large (upto millions of items), and I can choose the format I store it in, as long as loading is fastest.\nWhich is the fastest method, and why?\n\nUsing import on a .py file that just contains the list assigned to a variable\nUsing cPickle's load\nSome other method (perhaps numpy?)\n\nAlso, how can one benchmark such things reliably?\nAddendum: measuring this reliably is difficult, because import is cached so it can't be executed multiple times in a test. The loading with pickle also gets faster after the first time probably because page-precaching by the OS. Loading 1 million numbers with cPickle takes 1.1 sec the first time run, and 0.2 sec on subsequent executions of the script.\nIntuitively I feel cPickle should be faster, but I'd appreciate numbers (this is quite a challenge to measure, I think). \nAnd yes, it's important for me that this performs quickly.\nThanks","AnswerCount":6,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":8359,"Q_Id":556730,"Users Score":1,"Answer":"cPickle will be the fastest since it is saved in binary and no real python code has to be parsed.\nOther advantates are that it is more secure (since it does not execute commands) and you have no problems with setting $PYTHONPATH correctly.","Q_Score":11,"Tags":"python,serialization,caching","A_Id":556961,"CreationDate":"2009-02-17T13:16:00.000","Title":"Python list serialization - fastest method","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working at some plots and statistics for work and I am not sure how I can do some statistics using numpy: I have a list of prices and another one of basePrices. 
And I want to know how many prices are with X percent above basePrice, how many are with Y percent above basePrice.\nIs there a simple way to do that using numpy?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":1834,"Q_Id":570137,"Users Score":1,"Answer":"In addition to df's answer, if you want to know the specific prices that are above the base prices, you can do:\nprices[prices > (1.10 * base_prices)]","Q_Score":1,"Tags":"python,numpy","A_Id":570197,"CreationDate":"2009-02-20T16:04:00.000","Title":"Statistics with numpy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does anybody know if Python has an equivalent to Java's SortedSet interface?\nHeres what I'm looking for: lets say I have an object of type foo, and I know how to compare two objects of type foo to see whether foo1 is \"greater than\" or \"less than\" foo2. I want a way of storing many objects of type foo in a list L, so that whenever I traverse the list L, I get the objects in order, according to the comparison method I define.\nEdit:\nI guess I can use a dictionary or a list and sort() it every time I modify it, but is this the best way?","AnswerCount":7,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":9278,"Q_Id":628192,"Users Score":0,"Answer":"Do you have the possibility of using Jython? I just mention it because using TreeMap, TreeSet, etc. is trivial. Also if you're coming from a Java background and you want to head in a Pythonic direction Jython is wonderful for making the transition easier. Though I recognise that use of TreeSet in this case would not be part of such a \"transition\".\nFor Jython superusers I have a question myself: the blist package can't be imported because it uses a C file which must be imported. But would there be any advantage of using blist instead of TreeSet? Can we generally assume the JVM uses algorithms which are essentially as good as those of CPython stuff?","Q_Score":22,"Tags":"python,data-structures","A_Id":20666684,"CreationDate":"2009-03-09T21:58:00.000","Title":"Python equivalent to java.util.SortedSet?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I had been interested in neural networks for a bit and thought about using one in python for a light project that compares various minimization techniques in a time domain (which is fastest).\nThen I realized I didn't even know if a NN is good for minimization. What do you think?","AnswerCount":8,"Available Count":4,"Score":0.0748596907,"is_accepted":false,"ViewCount":7231,"Q_Id":652283,"Users Score":3,"Answer":"Back-propagation works by minimizing the error. However, you can really minimize whatever you want. So, you could use back-prop-like update rules to find the Artificial Neural Network inputs that minimize the output.\nThis is a big question, sorry for the short answer. 
I should also add that my suggested approach sounds pretty inefficient compared to more established methods and would only find a local minima.","Q_Score":10,"Tags":"python,artificial-intelligence,neural-network,minimization","A_Id":13611588,"CreationDate":"2009-03-16T21:53:00.000","Title":"Can a neural network be used to find a functions minimum(a)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had been interested in neural networks for a bit and thought about using one in python for a light project that compares various minimization techniques in a time domain (which is fastest).\nThen I realized I didn't even know if a NN is good for minimization. What do you think?","AnswerCount":8,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":7231,"Q_Id":652283,"Users Score":0,"Answer":"They're pretty bad for the purpose; one of the big problems of neural networks is that they get stuck in local minima. You might want to look into support vector machines instead.","Q_Score":10,"Tags":"python,artificial-intelligence,neural-network,minimization","A_Id":652348,"CreationDate":"2009-03-16T21:53:00.000","Title":"Can a neural network be used to find a functions minimum(a)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had been interested in neural networks for a bit and thought about using one in python for a light project that compares various minimization techniques in a time domain (which is fastest).\nThen I realized I didn't even know if a NN is good for minimization. What do you think?","AnswerCount":8,"Available Count":4,"Score":0.024994793,"is_accepted":false,"ViewCount":7231,"Q_Id":652283,"Users Score":1,"Answer":"The training process of a back-propagation neural network works by minimizing the error from the optimal result. But having a trained neural network finding the minimum of an unknown function would be pretty hard.\nIf you restrict the problem to a specific function class, it could work, and be pretty quick too. Neural networks are good at finding patterns, if there are any.","Q_Score":10,"Tags":"python,artificial-intelligence,neural-network,minimization","A_Id":652327,"CreationDate":"2009-03-16T21:53:00.000","Title":"Can a neural network be used to find a functions minimum(a)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had been interested in neural networks for a bit and thought about using one in python for a light project that compares various minimization techniques in a time domain (which is fastest).\nThen I realized I didn't even know if a NN is good for minimization. What do you think?","AnswerCount":8,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":7231,"Q_Id":652283,"Users Score":-5,"Answer":"Neural networks are classifiers. They separate two classes of data elements. They learn this separation (usually) by preclassified data elements. 
Thus, I say: No, unless you do a major stretch beyond breakage.","Q_Score":10,"Tags":"python,artificial-intelligence,neural-network,minimization","A_Id":652362,"CreationDate":"2009-03-16T21:53:00.000","Title":"Can a neural network be used to find a functions minimum(a)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have binary files no larger than 20Mb in size that have a header section and then a data section containing sequences of uchars. I have Numpy, SciPy, etc. and each library has different ways of loading in the data. Any suggestions for the most efficient methods I should use?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":450,"Q_Id":703262,"Users Score":0,"Answer":"I found that array.fromfile is the fastest methods for homogeneous data.","Q_Score":6,"Tags":"python,input,binaryfiles","A_Id":703588,"CreationDate":"2009-03-31T22:03:00.000","Title":"Most efficient way of loading formatted binary files in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What ready available algorithms could I use to data mine twitter to find out the degrees of separation between 2 people on twitter.\nHow does it change when the social graph keeps changing and updating constantly.\nAnd then, is there any dump of twitter social graph data which I could use rather than making so many API calls to start over.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2436,"Q_Id":785327,"Users Score":0,"Answer":"There was a company offering a dump of the social graph, but it was taken down and no longer available. As you already realized - it is kind of hard, as it is changing all the time.\nI would recommend checking out their social_graph api methods as they give the most info with the least API calls.","Q_Score":3,"Tags":"python,twitter,dump,social-graph","A_Id":817451,"CreationDate":"2009-04-24T10:30:00.000","Title":"Twitter Data Mining: Degrees of separation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I quickly checked numPy but it looks like it's using arrays as vectors? 
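For the binary-file question above (a header followed by a run of uchars), a minimal numpy-based sketch; the file name and the 128-byte header size are assumptions, since the question doesn't give them.

import numpy as np

header_size = 128                            # assumed fixed header length
with open('data.bin', 'rb') as f:            # hypothetical file name
    header = f.read(header_size)             # parse this however the format requires
    payload = np.fromfile(f, dtype=np.uint8) # the uchar data section as a 1-D array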
I am looking for a proper Vector3 type that I can instance and work on.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":12341,"Q_Id":786691,"Users Score":2,"Answer":"I don't believe there is anything standard (but I could be wrong, I don't keep up with python that closely).\nIt's very easy to implement though, and you may want to build on top of the numpy array as a container for it anyway, which gives you lots of good (and efficient) bits and pieces.","Q_Score":6,"Tags":"python,vector","A_Id":786758,"CreationDate":"2009-04-24T16:46:00.000","Title":"Is there a Vector3 type in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to read binary MATLAB .mat files in Python?\nI've seen that SciPy has alleged support for reading .mat files, but I'm unsuccessful with it. I installed SciPy version 0.7.0, and I can't find the loadmat() method.","AnswerCount":12,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":566507,"Q_Id":874461,"Users Score":12,"Answer":"There is a great library for this task called: pymatreader.\nJust do as follows:\n\nInstall the package: pip install pymatreader\n\nImport the relevant function of this package: from pymatreader import read_mat\n\nUse the function to read the matlab struct: data = read_mat('matlab_struct.mat')\n\nuse data.keys() to locate where the data is actually stored.\n\n\n\nThe keys will usually look like: dict_keys(['__header__', '__version__', '__globals__', 'data_opp']). Where data_opp will be the actual key which stores the data. The name of this key can ofcourse be changed between different files.\n\n\nLast step - Create your dataframe: my_df = pd.DataFrame(data['data_opp'])\n\nThat's it :)","Q_Score":508,"Tags":"python,matlab,file-io,scipy,mat-file","A_Id":66453257,"CreationDate":"2009-05-17T12:02:00.000","Title":"Read .mat files in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a simple solution using Python to store data as a flat file, such that each line is a string representation of an array that can be easily parsed.\nI'm sure python has library for doing such a task easily but so far all the approaches I have found seemed like it would have been sloppy to get it to work and I'm sure there is a better approach. So far I've tried: \n\nthe array.toFile() method but couldn't figure out how to get it to work with nested arrays of strings, it seemed geared towards integer data.\nLists and sets do not have a toFile method built in, so I would have had to parse and encode it manually. 
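Returning to the .mat question above: the loadmat the asker couldn't find lives in scipy.io, and the pymatreader route in the answer behaves much the same way. A minimal sketch, reusing the file name from the answer:

from scipy.io import loadmat

data = loadmat('matlab_struct.mat')      # returns a dict of variables
print(data.keys())                       # '__header__', '__version__', '__globals__' plus your variables
variables = {k: v for k, v in data.items() if not k.startswith('__')}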
\nCSV seemed like a good approach but this would also require manually parsing it, and did not allow me to simply append new lines at the end - so any new calls the the CSVWriter would overwrite the file existing data.\n\nI'm really trying to avoid using databases (maybe SQLite but it seems a bit overkill) because I'm trying to develop this to have no software prerequisites besides Python.","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":5460,"Q_Id":875228,"Users Score":2,"Answer":"I'm looking for a simple solution using Python to store data as a flat file, such that each line is a string representation of an array that can be easily parsed.\n\nIs the data only ever going to be parsed by Python programs? If not, then I'd avoid pickle et al (shelve and marshal) since they're very Python specific. JSON and YAML have the important advantage that parsers are easily available for most any language.","Q_Score":3,"Tags":"python,file-io,csv,multidimensional-array,fileparsing","A_Id":875525,"CreationDate":"2009-05-17T19:00:00.000","Title":"Simple data storing in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a matrix in the type of a Numpy array. How would I write it to disk it as an image? Any format works (png, jpeg, bmp...). One important constraint is that PIL is not present.","AnswerCount":21,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":767103,"Q_Id":902761,"Users Score":0,"Answer":"I attach an simple routine to convert a npy to an image. Works 100% and it is a piece of cake!\nfrom PIL import Image\nimport matplotlib\nimg = np.load('flair1_slice75.npy')\nmatplotlib.image.imsave(\"G1_flair_75.jpeg\", img)","Q_Score":370,"Tags":"python,image,numpy","A_Id":72331083,"CreationDate":"2009-05-24T00:08:00.000","Title":"Saving a Numpy array as an image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the python OpenCV bindings and at the moment I try to isolate a colorrange. That means I want to filter out everything that is not reddish. \nI tried to take only the red color channel but this includes the white spaces in the Image too. \nWhat is a good way to do that?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2798,"Q_Id":968317,"Users Score":0,"Answer":"How about using a formular like r' = r-(g+b)?","Q_Score":1,"Tags":"python,image-processing,opencv,color-space","A_Id":968351,"CreationDate":"2009-06-09T05:36:00.000","Title":"How to isolate a single color in an image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the python OpenCV bindings and at the moment I try to isolate a colorrange. That means I want to filter out everything that is not reddish. \nI tried to take only the red color channel but this includes the white spaces in the Image too. 
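On the save-a-numpy-array-as-an-image question above, the stated constraint was "no PIL"; matplotlib's imsave should be able to write a PNG on its own, so here is a minimal sketch (the array contents and file name are made up):

import numpy as np
import matplotlib.image

img = np.random.rand(64, 64)                          # made-up grayscale data in [0, 1]
matplotlib.image.imsave('out.png', img, cmap='gray')  # PNG output without going through PIL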
\nWhat is a good way to do that?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":2798,"Q_Id":968317,"Users Score":1,"Answer":"Use the HSV colorspace. Select pixels that have an H value in the range that you consider to contain \"red,\" and an S value large enough that you do not consider it to be neutral, maroon, brown, or pink. You might also need to throw out pixels with low V's. The H dimension is a circle, and red is right where the circle is split, so your H range will be in two parts, one near 255, the other near 0.","Q_Score":1,"Tags":"python,image-processing,opencv,color-space","A_Id":2204755,"CreationDate":"2009-06-09T05:36:00.000","Title":"How to isolate a single color in an image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to use named entity recognition (NER) to find adequate tags for texts in a database.\nI know there is a Wikipedia article about this and lots of other pages describing NER, I would preferably hear something about this topic from you:\n\nWhat experiences did you make with the various algorithms?\nWhich algorithm would you recommend?\nWhich algorithm is the easiest to implement (PHP\/Python)?\nHow to the algorithms work? Is manual training necessary?\n\nExample:\n\"Last year, I was in London where I saw Barack Obama.\" => Tags: London, Barack Obama\nI hope you can help me. Thank you very much in advance!","AnswerCount":6,"Available Count":1,"Score":-1.0,"is_accepted":false,"ViewCount":9656,"Q_Id":1026925,"Users Score":-11,"Answer":"I don't really know about NER, but judging from that example, you could make an algorithm that searched for capital letters in the words or something like that. For that I would recommend regex as the most easy to implement solution if you're thinking small.\nAnother option is to compare the texts with a database, wich yould match string pre-identified as Tags of interest.\nmy 5 cents.","Q_Score":22,"Tags":"php,python,extract,analysis,named-entity-recognition","A_Id":1026976,"CreationDate":"2009-06-22T12:26:00.000","Title":"Algorithms for named entity recognition","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to process uploaded photos with PIL and determine some \"soft\" image metrics like:\n\nis the image contrastful or dull?\ncolorful or monochrome?\nbright or dark?\nis the image warm or cold (regarding light temperature)?\nis there a dominant hue?\n\nthe metrics should be measured in a rating-style, e.g. colorful++++ for a very colorful photo, colorful+ for a rather monochrome image.\nI already noticed PIL's ImageStat Module, that calculates some interesting values for my metrics, e.g. RMS of histogram etc. However, this module is rather poorly documented, so I'm looking for more concrete algorithms to determine these metrics.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":609,"Q_Id":1037090,"Users Score":1,"Answer":"I don't think there are methods that give you a metric exactly for what you want, but the methods that it has, like RMS, takes you a long way there. 
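The HSV answer above translates naturally into the newer cv2 bindings (the question predates them); 8-bit hue in OpenCV runs 0-179, so "red" is the union of a low band and a high band near the wrap-around. The file name and the exact thresholds are assumptions:

import cv2
import numpy as np

img = cv2.imread('photo.jpg')                                                     # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
low_reds = cv2.inRange(hsv, np.array([0, 70, 50]), np.array([10, 255, 255]))      # red near H = 0
high_reds = cv2.inRange(hsv, np.array([170, 70, 50]), np.array([179, 255, 255]))  # red near the wrap-around
mask = cv2.bitwise_or(low_reds, high_reds)
red_only = cv2.bitwise_and(img, img, mask=mask)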
To do things with color, you can split the image into one layer per color, and get the RMS on each layer, which tells you some of the things you want to know. You can also convert the image in different ways so that you only retain color information, etc.","Q_Score":4,"Tags":"python,python-imaging-library","A_Id":1037217,"CreationDate":"2009-06-24T08:30:00.000","Title":"Simple Image Metrics with PIL","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i'm just testing out the csv component in python, and i am having some trouble with it. \nI have a fairly standard csv string, and the default options all seems to fit with my test, but the result shouldn't group 1, 2, 3, 4 in a row and 5, 6, 7, 8 in a row?\nThanks a lot for any enlightenment provided!\n\nPython 2.6.2 (r262:71600, Apr 16 2009, 09:17:39) \n[GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import csv\n>>> c = \"1, 2, 3, 4\\n 5, 6, 7, 8\\n\"\n>>> test = csv.reader(c)\n>>> for t in test:\n... print t\n... \n['1']\n['', '']\n[' ']\n['2']\n['', '']\n[' ']\n['3']\n['', '']\n[' ']\n['4']\n[]\n[' ']\n['5']\n['', '']\n[' ']\n['6']\n['', '']\n[' ']\n['7']\n['', '']\n[' ']\n['8']\n[]\n>>>","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":850,"Q_Id":1083364,"Users Score":2,"Answer":"test = csv.reader(c.split('\\n'))","Q_Score":7,"Tags":"python,csv","A_Id":1083367,"CreationDate":"2009-07-05T02:49:00.000","Title":"python csv question","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i'm just testing out the csv component in python, and i am having some trouble with it. \nI have a fairly standard csv string, and the default options all seems to fit with my test, but the result shouldn't group 1, 2, 3, 4 in a row and 5, 6, 7, 8 in a row?\nThanks a lot for any enlightenment provided!\n\nPython 2.6.2 (r262:71600, Apr 16 2009, 09:17:39) \n[GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import csv\n>>> c = \"1, 2, 3, 4\\n 5, 6, 7, 8\\n\"\n>>> test = csv.reader(c)\n>>> for t in test:\n... print t\n... \n['1']\n['', '']\n[' ']\n['2']\n['', '']\n[' ']\n['3']\n['', '']\n[' ']\n['4']\n[]\n[' ']\n['5']\n['', '']\n[' ']\n['6']\n['', '']\n[' ']\n['7']\n['', '']\n[' ']\n['8']\n[]\n>>>","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":850,"Q_Id":1083364,"Users Score":8,"Answer":"csv.reader expects an iterable. You gave it \"1, 2, 3, 4\\n 5, 6, 7, 8\\n\"; iteration produces characters. 
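A quick check of the split-the-string fix from the csv answer above (Python 3 print shown; the original session was Python 2):

import csv

c = "1, 2, 3, 4\n 5, 6, 7, 8\n"
for row in csv.reader(c.splitlines()):   # splitlines() also drops the trailing empty line that split('\n') leaves
    print(row)
# ['1', ' 2', ' 3', ' 4']
# [' 5', ' 6', ' 7', ' 8']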
Try giving it [\"1, 2, 3, 4\\n\", \"5, 6, 7, 8\\n\"] -- iteration will produce lines.","Q_Score":7,"Tags":"python,csv","A_Id":1083376,"CreationDate":"2009-07-05T02:49:00.000","Title":"python csv question","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working with Python and MATLAB right now and I have a 2D array in Python that I need to write to a file and then be able to read it into MATLAB as a matrix. Any ideas on how to do this? \nThanks!","AnswerCount":7,"Available Count":2,"Score":0.1137907297,"is_accepted":false,"ViewCount":69405,"Q_Id":1095265,"Users Score":4,"Answer":"You could write the matrix in Python to a CSV file and read it in MATLAB using csvread.","Q_Score":51,"Tags":"python,matlab,file-io,import,matrix","A_Id":1095296,"CreationDate":"2009-07-07T22:55:00.000","Title":"Matrix from Python to MATLAB","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working with Python and MATLAB right now and I have a 2D array in Python that I need to write to a file and then be able to read it into MATLAB as a matrix. Any ideas on how to do this? \nThanks!","AnswerCount":7,"Available Count":2,"Score":0.1418931938,"is_accepted":false,"ViewCount":69405,"Q_Id":1095265,"Users Score":5,"Answer":"I would probably use numpy.savetxt('yourfile.mat',yourarray) in Python\nand then yourarray = load('yourfile.mat') in MATLAB.","Q_Score":51,"Tags":"python,matlab,file-io,import,matrix","A_Id":7737622,"CreationDate":"2009-07-07T22:55:00.000","Title":"Matrix from Python to MATLAB","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How would you create an algorithm to solve the following puzzle, \"Mastermind\"?\nYour opponent has chosen four different colours from a set of six (yellow, blue, green, red, orange, purple). You must guess which they have chosen, and in what order. After each guess, your opponent tells you how many (but not which) of the colours you guessed were the right colour in the right place [\"blacks\"] and how many (but not which) were the right colour but in the wrong place [\"whites\"]. The game ends when you guess correctly (4 blacks, 0 whites).\nFor example, if your opponent has chosen (blue, green, orange, red), and you guess (yellow, blue, green, red), you will get one \"black\" (for the red), and two whites (for the blue and green). You would get the same score for guessing (blue, orange, red, purple).\nI'm interested in what algorithm you would choose, and (optionally) how you translate that into code (preferably Python). I'm interested in coded solutions that are:\n\nClear (easily understood)\nConcise\nEfficient (fast in making a guess)\nEffective (least number of guesses to solve the puzzle)\nFlexible (can easily answer questions about the algorithm, e.g. 
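One more route for the Python-to-MATLAB matrix question above, sketched under the assumption that SciPy is available: scipy.io.savemat writes a real .mat file, which avoids the text round-trip of the CSV and savetxt answers. The variable and file names are made up.

import numpy as np
from scipy.io import savemat

a = np.arange(12.0).reshape(3, 4)        # made-up 2-D array
savemat('from_python.mat', {'a': a})     # in MATLAB: s = load('from_python.mat'); s.a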
what is its worst case?)\nGeneral (can be easily adapted to other types of puzzle than Mastermind)\n\nI'm happy with an algorithm that's very effective but not very efficient (provided it's not just poorly implemented!); however, a very efficient and effective algorithm implemented inflexibly and impenetrably is not of use.\nI have my own (detailed) solution in Python which I have posted, but this is by no means the only or best approach, so please post more! I'm not expecting an essay ;)","AnswerCount":9,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37167,"Q_Id":1185634,"Users Score":0,"Answer":"To work out the \"worst\" case, instead of using entropic I am looking to the partition that has the maximum number of elements, then select the try that is a minimum for this maximum => This will give me the minimum number of remaining possibility when I am not lucky (which happens in the worst case).\nThis always solve standard case in 5 attempts, but it is not a full proof that 5 attempts are really needed because it could happen that for next step a bigger set possibilities would have given a better result than a smaller one (because easier to distinguish between).\nThough for the \"Standard game\" with 1680 I have a simple formal proof:\nFor the first step the try that gives the minimum for the partition with the maximum number is 0,0,1,1: 256. Playing 0,0,1,2 is not as good: 276.\nFor each subsequent try there are 14 outcomes (1 not placed and 3 placed is impossible) and 4 placed is giving a partition of 1. This means that in the best case (all partition same size) we will get a maximum partition that is a minimum of (number of possibilities - 1)\/13 (rounded up because we have integer so necessarily some will be less and other more, so that the maximum is rounded up).\nIf I apply this:\nAfter first play (0,0,1,1) I am getting 256 left.\nAfter second try: 20 = (256-1)\/13\nAfter third try : 2 = (20-1)\/13\nThen I have no choice but to try one of the two left for the 4th try. \nIf I am unlucky a fifth try is needed.\nThis proves we need at least 5 tries (but not that this is enough).","Q_Score":38,"Tags":"python,algorithm","A_Id":9515347,"CreationDate":"2009-07-26T21:43:00.000","Title":"How to solve the \"Mastermind\" guessing game?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hierarchical Bayes models are commonly used in Marketing, Political Science, and Econometrics. Yet, the only package I know of is bayesm, which is really a companion to a book (Bayesian Statistics and Marketing, by Rossi, et al.) Am I missing something? Is there a software package for R or Python doing the job out there, and\/or a worked-out example in the associated language?","AnswerCount":7,"Available Count":3,"Score":0.057080742,"is_accepted":false,"ViewCount":8690,"Q_Id":1191689,"Users Score":2,"Answer":"I apply hierarchical Bayes models in R in combination with JAGS (Linux) or sometimes WinBUGS (Windows, or Wine). 
Check out the book of Andrew Gelman, as referred to above.","Q_Score":12,"Tags":"python,r,statistics","A_Id":1832314,"CreationDate":"2009-07-28T02:43:00.000","Title":"Hierarchical Bayes for R or Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hierarchical Bayes models are commonly used in Marketing, Political Science, and Econometrics. Yet, the only package I know of is bayesm, which is really a companion to a book (Bayesian Statistics and Marketing, by Rossi, et al.) Am I missing something? Is there a software package for R or Python doing the job out there, and\/or a worked-out example in the associated language?","AnswerCount":7,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":8690,"Q_Id":1191689,"Users Score":0,"Answer":"This answer comes almost ten years late, but it will hopefully help someone in the future.\nThe brms package in R is a very good option for Bayesian hierarchical\/multilevel models, using a syntax very similar to the lme4 package.\nThe brms package uses the probabilistic programming language Stan in the back to do the inferences. Stan uses more advanced sampling methods than JAGS and BUGS, such as Hamiltonian Monte Carlo, which provides more efficient and reliable samples from the posterior distribution.\nIf you wish to model more complicated phenomena, then you can use the rstanpackage to compile Stan models from R. There is also the Python alternative PyStan. However, in order to do this, you must learn how to use Stan.","Q_Score":12,"Tags":"python,r,statistics","A_Id":55978470,"CreationDate":"2009-07-28T02:43:00.000","Title":"Hierarchical Bayes for R or Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hierarchical Bayes models are commonly used in Marketing, Political Science, and Econometrics. Yet, the only package I know of is bayesm, which is really a companion to a book (Bayesian Statistics and Marketing, by Rossi, et al.) Am I missing something? Is there a software package for R or Python doing the job out there, and\/or a worked-out example in the associated language?","AnswerCount":7,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":8690,"Q_Id":1191689,"Users Score":0,"Answer":"The lme4 package, which estimates hierarchical models using frequentist methods, has a function called mcmcsamp that allows you to sample from the posterior distribution of the model using MCMC. This currently works only for linear models, quite unfortunately.","Q_Score":12,"Tags":"python,r,statistics","A_Id":1197766,"CreationDate":"2009-07-28T02:43:00.000","Title":"Hierarchical Bayes for R or Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a CSV file which I am processing and putting the processed data into a text file.\nThe entire data that goes into the text file is one big table(comma separated instead of space). My problem is How do I remember the column into which a piece of data goes in the text file?\nFor eg. Assume there is a column called 'col'.\nI just put some data under col. 
Now after a few iterations, I want to put some other piece of data under col again (In a different row). How do I know where exactly col comes? (And there are a lot of columns like this.)\nHope I am not too vague...","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":798,"Q_Id":1199350,"Users Score":0,"Answer":"Probably either a dict of list or a list of dict. Personally, I'd go with the former. So, parse the heading row of the CSV to get a dict from column heading to column index. Then when you're reading through each row, work out what index you're at, grab the column heading, and then append to the end of the list for that column heading.","Q_Score":2,"Tags":"python,file,csv","A_Id":1199371,"CreationDate":"2009-07-29T10:44:00.000","Title":"Whats the best way of putting tabular data into python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a CSV file which I am processing and putting the processed data into a text file.\nThe entire data that goes into the text file is one big table(comma separated instead of space). My problem is How do I remember the column into which a piece of data goes in the text file?\nFor eg. Assume there is a column called 'col'.\nI just put some data under col. Now after a few iterations, I want to put some other piece of data under col again (In a different row). How do I know where exactly col comes? (And there are a lot of columns like this.)\nHope I am not too vague...","AnswerCount":6,"Available Count":2,"Score":0.0333209931,"is_accepted":false,"ViewCount":798,"Q_Id":1199350,"Users Score":1,"Answer":"Is SQLite an option for you? I know that you have CSV input and output. However, you can import all the data into the SQLite database. Then do all the necessary processing with the power of SQL. Then you can export the results as CSV.","Q_Score":2,"Tags":"python,file,csv","A_Id":1199409,"CreationDate":"2009-07-29T10:44:00.000","Title":"Whats the best way of putting tabular data into python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the cleanest way to add a field to a structured numpy array? Can it be done destructively, or is it necessary to create a new array and copy over the existing fields? Are the contents of each field stored contiguously in memory so that such copying can be done efficiently?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":9033,"Q_Id":1201817,"Users Score":20,"Answer":"If you're using numpy 1.3, there's also numpy.lib.recfunctions.append_fields(). \nFor many installations, you'll need to import numpy.lib.recfunctions to access this. 
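For the which-column-does-this-go-in question above, the dict-of-lists suggestion looks roughly like this sketch (the 'col' heading comes from the question; the file name and everything else is assumed):

import csv
from collections import defaultdict

columns = defaultdict(list)                  # heading -> values, in row order
with open('input.csv', newline='') as f:     # hypothetical input file
    for row in csv.DictReader(f):            # DictReader keys each value by its column heading
        for heading, value in row.items():
            columns[heading].append(value)

columns['col'].append('new value')           # later data destined for 'col' simply appends here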
import numpy will not allow one to see the numpy.lib.recfunctions","Q_Score":22,"Tags":"python,numpy","A_Id":1208039,"CreationDate":"2009-07-29T17:24:00.000","Title":"Adding a field to a structured numpy array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can anyone explain to me how to do more complex data sets like team stats, weather, dice, complex number types \ni understand all the math and how everything works i just dont know how to input more complex data, and then how to read the data it spits out \nif someone could provide examples in python that would be a big help","AnswerCount":4,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":671,"Q_Id":1205449,"Users Score":3,"Answer":"You have to encode your input and your output to something that can be represented by the neural network units. ( for example 1 for \"x has a certain property p\" -1 for \"x doesn't have the property p\" if your units' range is in [-1, 1])\nThe way you encode your input and the way you decode your output depends on what you want to train the neural network for. \nMoreover, there are many \"neural networks\" algoritms and learning rules for different tasks( Back propagation, boltzman machines, self organizing maps).","Q_Score":3,"Tags":"python,neural-network","A_Id":1205509,"CreationDate":"2009-07-30T09:22:00.000","Title":"Neural net input\/output","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can anyone explain to me how to do more complex data sets like team stats, weather, dice, complex number types \ni understand all the math and how everything works i just dont know how to input more complex data, and then how to read the data it spits out \nif someone could provide examples in python that would be a big help","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":671,"Q_Id":1205449,"Users Score":0,"Answer":"You have to add the number of units for input and output you need for the problem. If the unknown function to approximate depends on n parameter, you will have n input units. The number of output units depends on the nature of the funcion. For real functions with n real parameters you will have one output unit.\nSome problems, for example in forecasting of time series, you will have m output units for the m succesive values of the function. The encoding is important and depends on the choosen algorithm. For example, in backpropagation for feedforward nets, is better to transform, if possible, the greater number of features in discrete inputs, as for classification tasks.\nOther aspect of the encoding is that you have to evaluate the number of input and hidden units in function of the amount of data. Too many units related to data may give poor approximation due the course ff dimensionality problem. 
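A minimal sketch of the append_fields call mentioned above (the field names and values are made up); note the explicit submodule import, as the answer warns:

import numpy as np
import numpy.lib.recfunctions as rfn     # plain "import numpy" does not expose this

a = np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)])
b = rfn.append_fields(a, 'z', [10.0, 20.0], usemask=False)
print(b.dtype.names)                     # ('x', 'y', 'z'); a new array is returned, 'a' is not modified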
In some cases, you may to aggregate some of the input data in some way to avoid that problem or use some reduction mechanism as PCA.","Q_Score":3,"Tags":"python,neural-network","A_Id":20683280,"CreationDate":"2009-07-30T09:22:00.000","Title":"Neural net input\/output","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can anyone explain to me how to do more complex data sets like team stats, weather, dice, complex number types \ni understand all the math and how everything works i just dont know how to input more complex data, and then how to read the data it spits out \nif someone could provide examples in python that would be a big help","AnswerCount":4,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":671,"Q_Id":1205449,"Users Score":2,"Answer":"Your features must be decomposed into parts that can be represented as real numbers. The magic of a Neural Net is it's a black box, the correct associations will be made (with internal weights) during the training\n\nInputs\nChoose as few features as are needed to accurately describe the situation, then decompose each into a set of real valued numbers.\n\nWeather: [temp today, humidity today, temp yesterday, humidity yesterday...] the association between today's temp and today's humidity is made internally\nTeam stats: [ave height, ave weight, max height, top score,...]\nDice: not sure I understand this one, do you mean how to encode discrete values?*\nComplex number: [a,ai,b,bi,...]\n\n* Discrete valued features are tricky, but can still still be encoded as (0.0,1.0). The problem is they don't provide a gradient to learn the threshold on.\n\nOutputs\nYou decide what you want the output to mean, and then encode your training examples in that format. The fewer output values, the easier to train.\n\nWeather: [tomorrow's chance of rain, tomorrow's temp,...] **\nTeam stats: [chance of winning, chance of winning by more than 20,...]\nComplex number: [x,xi,...]\n\n** Here your training vectors would be: 1.0 if it rained the next day, 0.0 if it didn't\n\nOf course, whether or not the problem can actually be modeled by a neural net is a different question.","Q_Score":3,"Tags":"python,neural-network","A_Id":1207505,"CreationDate":"2009-07-30T09:22:00.000","Title":"Neural net input\/output","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can anyone explain to me how to do more complex data sets like team stats, weather, dice, complex number types \ni understand all the math and how everything works i just dont know how to input more complex data, and then how to read the data it spits out \nif someone could provide examples in python that would be a big help","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":671,"Q_Id":1205449,"Users Score":0,"Answer":"More complex data usually means adding more neurons in the input and output layers.\nYou can feed each \"field\" of your register, properly encoded as a real value (normalized, etc.) to each input neuron, or maybe you can even decompose even further into bit fields, assigning saturated inputs of 1 or 0 to the neurons... 
for the output, it depends on how you train the neural network, it will try to mimic the training set outputs.","Q_Score":3,"Tags":"python,neural-network","A_Id":1206597,"CreationDate":"2009-07-30T09:22:00.000","Title":"Neural net input\/output","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I will have to implement a convolution of two functions in Python, but SciPy\/Numpy appear to have functions only for the convolution of two arrays.\nBefore I try to implement this by using the the regular integration expression of convolution, I would like to ask if someone knows of an already available module that performs these operations.\nFailing that, which of the several kinds of integration that SciPy provides is the best suited for this?\nThanks!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":8768,"Q_Id":1222147,"Users Score":1,"Answer":"Yes, SciPy\/Numpy is mostly concerned about arrays.\nIf you can tolerate an approximate solution, and your functions only operate over a range of value (not infinite) you can fill an array with the values and convolve the arrays.\nIf you want something more \"correct\" calculus-wise you would probably need a powerful solver (mathmatica, maple...)","Q_Score":2,"Tags":"python,convolution","A_Id":1226509,"CreationDate":"2009-08-03T12:42:00.000","Title":"Convolution of two functions in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an app that handle sets of financial series data (input as csv or open document), one set could be say 10's x 1000's up to double precision numbers (Simplifying, but thats what matters).\nI plan to do operations on that data (eg. sum, difference, averages etc.) as well including generation of say another column based on computations on the input. This will be between columns (row level operations) on one set and also between columns on many (potentially all) sets at the row level also. I plan to write it in Python and it will eventually need a intranet facing interface to display the results\/graphs etc. for now, csv output based on some input parameters will suffice.\nWhat is the best way to store the data and manipulate? So far I see my choices as being either (1) to write csv files to disk and trawl through them to do the math or (2) I could put them into a database and rely on the database to handle the math. My main concern is speed\/performance as the number of datasets grows as there will be inter-dataset row level math that needs to be done.\n-Has anyone had experience going down either path and what are the pitfalls\/gotchas that I should be aware of?\n-What are the reasons why one should be chosen over another? \n-Are there any potential speed\/performance pitfalls\/boosts that I need to be aware of before I start that could influence the design?\n-Is there any project or framework out there to help with this type of task? \n-Edit-\nMore info:\nThe rows will all read all in order, BUT I may need to do some resampling\/interpolation to match the differing input lengths as well as differing timestamps for each row. 
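The sample-then-convolve idea from the accepted convolution answer above, as a minimal sketch; the two functions, the grid, and the step size are arbitrary choices:

import numpy as np

dx = 0.01
x = np.arange(-5.0, 5.0, dx)
f = np.exp(-x**2)                           # example function 1
g = np.where(np.abs(x) < 1.0, 1.0, 0.0)     # example function 2 (a box)
conv = np.convolve(f, g, mode='same') * dx  # multiply by dx to approximate the continuous integral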
Since each dataset will always have a differing length that is not fixed, I'll have some scratch table\/memory somewhere to hold the interpolated\/resampled versions. I'm not sure if it makes more sense to try to store this (and try to upsample\/interploate to a common higher length) or just regenerate it each time its needed.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":581,"Q_Id":1241758,"Users Score":0,"Answer":"Are you likely to need all rows in order or will you want only specific known rows?\nIf you need to read all the data there isn't much advantage to having it in a database.\nedit: If the code fits in memory then a simple CSV is fine. Plain text data formats are always easier to deal with than opaque ones if you can use them.","Q_Score":0,"Tags":"python,database,database-design,file-io","A_Id":1241784,"CreationDate":"2009-08-06T21:58:00.000","Title":"Store data series in file or database if I want to do row level math operations?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an app that handle sets of financial series data (input as csv or open document), one set could be say 10's x 1000's up to double precision numbers (Simplifying, but thats what matters).\nI plan to do operations on that data (eg. sum, difference, averages etc.) as well including generation of say another column based on computations on the input. This will be between columns (row level operations) on one set and also between columns on many (potentially all) sets at the row level also. I plan to write it in Python and it will eventually need a intranet facing interface to display the results\/graphs etc. for now, csv output based on some input parameters will suffice.\nWhat is the best way to store the data and manipulate? So far I see my choices as being either (1) to write csv files to disk and trawl through them to do the math or (2) I could put them into a database and rely on the database to handle the math. My main concern is speed\/performance as the number of datasets grows as there will be inter-dataset row level math that needs to be done.\n-Has anyone had experience going down either path and what are the pitfalls\/gotchas that I should be aware of?\n-What are the reasons why one should be chosen over another? \n-Are there any potential speed\/performance pitfalls\/boosts that I need to be aware of before I start that could influence the design?\n-Is there any project or framework out there to help with this type of task? \n-Edit-\nMore info:\nThe rows will all read all in order, BUT I may need to do some resampling\/interpolation to match the differing input lengths as well as differing timestamps for each row. Since each dataset will always have a differing length that is not fixed, I'll have some scratch table\/memory somewhere to hold the interpolated\/resampled versions. I'm not sure if it makes more sense to try to store this (and try to upsample\/interploate to a common higher length) or just regenerate it each time its needed.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":581,"Q_Id":1241758,"Users Score":0,"Answer":"What matters most if all data will fit simultaneously into memory. 
From the size that you give, it seems that this is easily the case (a few megabytes at worst).\nIf so, I would discourage using a relational database, and do all operations directly in Python. Depending on what other processing you need, I would probably rather use binary pickles, than CSV.","Q_Score":0,"Tags":"python,database,database-design,file-io","A_Id":1241787,"CreationDate":"2009-08-06T21:58:00.000","Title":"Store data series in file or database if I want to do row level math operations?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an app that handle sets of financial series data (input as csv or open document), one set could be say 10's x 1000's up to double precision numbers (Simplifying, but thats what matters).\nI plan to do operations on that data (eg. sum, difference, averages etc.) as well including generation of say another column based on computations on the input. This will be between columns (row level operations) on one set and also between columns on many (potentially all) sets at the row level also. I plan to write it in Python and it will eventually need a intranet facing interface to display the results\/graphs etc. for now, csv output based on some input parameters will suffice.\nWhat is the best way to store the data and manipulate? So far I see my choices as being either (1) to write csv files to disk and trawl through them to do the math or (2) I could put them into a database and rely on the database to handle the math. My main concern is speed\/performance as the number of datasets grows as there will be inter-dataset row level math that needs to be done.\n-Has anyone had experience going down either path and what are the pitfalls\/gotchas that I should be aware of?\n-What are the reasons why one should be chosen over another? \n-Are there any potential speed\/performance pitfalls\/boosts that I need to be aware of before I start that could influence the design?\n-Is there any project or framework out there to help with this type of task? \n-Edit-\nMore info:\nThe rows will all read all in order, BUT I may need to do some resampling\/interpolation to match the differing input lengths as well as differing timestamps for each row. Since each dataset will always have a differing length that is not fixed, I'll have some scratch table\/memory somewhere to hold the interpolated\/resampled versions. I'm not sure if it makes more sense to try to store this (and try to upsample\/interploate to a common higher length) or just regenerate it each time its needed.","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":581,"Q_Id":1241758,"Users Score":2,"Answer":"\"I plan to do operations on that data (eg. sum, difference, averages etc.) as well including generation of say another column based on computations on the input.\"\nThis is the standard use case for a data warehouse star-schema design. Buy Kimball's The Data Warehouse Toolkit. Read (and understand) the star schema before doing anything else.\n\"What is the best way to store the data and manipulate?\" \nA Star Schema.\nYou can implement this as flat files (CSV is fine) or RDBMS. If you use flat files, you write simple loops to do the math. If you use an RDBMS you write simple SQL and simple loops. 
\n\"My main concern is speed\/performance as the number of datasets grows\" \nNothing is as fast as a flat file. Period. RDBMS is slower. \nThe RDBMS value proposition stems from SQL being a relatively simple way to specify SELECT SUM(), COUNT() FROM fact JOIN dimension WHERE filter GROUP BY dimension attribute. Python isn't as terse as SQL, but it's just as fast and just as flexible. Python competes against SQL.\n\"pitfalls\/gotchas that I should be aware of?\"\nDB design. If you don't get the star schema and how to separate facts from dimensions, all approaches are doomed. Once you separate facts from dimensions, all approaches are approximately equal.\n\"What are the reasons why one should be chosen over another?\"\nRDBMS slow and flexible. Flat files fast and (sometimes) less flexible. Python levels the playing field.\n\"Are there any potential speed\/performance pitfalls\/boosts that I need to be aware of before I start that could influence the design?\"\nStar Schema: central fact table surrounded by dimension tables. Nothing beats it.\n\"Is there any project or framework out there to help with this type of task?\"\nNot really.","Q_Score":0,"Tags":"python,database,database-design,file-io","A_Id":1245169,"CreationDate":"2009-08-06T21:58:00.000","Title":"Store data series in file or database if I want to do row level math operations?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a solid implementation of an ordered associative array, that is, an ordered dictionary. I want the ordering in terms of keys, not of insertion order.\nMore precisely, I am looking for a space-efficent implementation of a int-to-float (or string-to-float for another use case) mapping structure for which:\n\nOrdered iteration is O(n)\nRandom access is O(1)\n\nThe best I came up with was gluing a dict and a list of keys, keeping the last one ordered with bisect and insert.\nAny better ideas?","AnswerCount":10,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":12867,"Q_Id":1319763,"Users Score":4,"Answer":"An ordered tree is usually better for this cases, but random access is going to be log(n). You should keep into account also insertion and removal costs...","Q_Score":36,"Tags":"python,data-structures,collections,dictionary","A_Id":1319790,"CreationDate":"2009-08-23T22:33:00.000","Title":"Key-ordered dict in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given a set of phrases, i would like to filter the set of all phrases that contain any of the other phrases. Contained here means that if a phrase contains all the words of another phrase it should be filtered out. Order of the words within the phrase does not matter.\nWhat i have so far is this:\n\nSort the set by the number of words in each phrase. 
\nFor each phrase X in the set:\n\n\nFor each phrase Y in the rest of the set:\n\n\nIf all the words in X are in Y then X is contained in Y, discard Y.\n\n\n\nThis is slow given a list of about 10k phrases.\nAny better options?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":709,"Q_Id":1372531,"Users Score":1,"Answer":"You could build an index which maps words to phrases and do something like:\n\nlet matched = set of all phrases\nfor each word in the searched phrase\n let wordMatch = all phrases containing the current word\n let matched = intersection of matched and wordMatch\n\nAfter this, matched would contain all phrases matching all words in the target phrase. It could be pretty well optimized by initializing matched to the set of all phrases containing only words[0], and then only iterating over words[1..words.length]. Filtering phrases which are too short to match the target phrase may improve performance, too.\nUnless I'm mistaken, a simple implementation has a worst case complexity (when the search phrase matches all phrases) of O(n\u00b7m), where n is the number of words in the search phrase, and m is the number of phrases.","Q_Score":1,"Tags":"c#,java,c++,python,algorithm","A_Id":1372627,"CreationDate":"2009-09-03T10:02:00.000","Title":"Algorithm to filter a set of all phrases containing in other phrase","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given a set of phrases, i would like to filter the set of all phrases that contain any of the other phrases. Contained here means that if a phrase contains all the words of another phrase it should be filtered out. Order of the words within the phrase does not matter.\nWhat i have so far is this:\n\nSort the set by the number of words in each phrase. \nFor each phrase X in the set:\n\n\nFor each phrase Y in the rest of the set:\n\n\nIf all the words in X are in Y then X is contained in Y, discard Y.\n\n\n\nThis is slow given a list of about 10k phrases.\nAny better options?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":709,"Q_Id":1372531,"Users Score":0,"Answer":"sort phrases by their contents, i.e., 'Z A' -> 'A Z', then eliminating phrases is easy going from shortest to longer ones.","Q_Score":1,"Tags":"c#,java,c++,python,algorithm","A_Id":1372585,"CreationDate":"2009-09-03T10:02:00.000","Title":"Algorithm to filter a set of all phrases containing in other phrase","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"One of the things I deal with most in data cleaning is missing values. R deals with this well using its \"NA\" missing data label. In python, it appears that I'll have to deal with masked arrays which seem to be a major pain to set up and don't seem to be well documented. Any suggestions on making this process easier in Python? This is becoming a deal-breaker in moving into Python for data analysis. Thanks\nUpdate It's obviously been a while since I've looked at the methods in the numpy.ma module. It appears that at least the basic analysis functions are available for masked arrays, and the examples provided helped me understand how to create masked arrays (thanks to the authors). 
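The word-to-phrase index from the first phrase-filtering answer above, sketched with a toy phrase list:

from collections import defaultdict

phrases = ["big red dog", "red dog", "small cat", "dog"]   # toy data
index = defaultdict(set)                                   # word -> indices of phrases containing it
for i, phrase in enumerate(phrases):
    for word in set(phrase.split()):
        index[word].add(i)

def phrases_containing(query):
    words = query.split()                                  # assumes a non-empty query
    matched = set.intersection(*(index[w] for w in words))
    return [phrases[i] for i in sorted(matched)]

print(phrases_containing("red dog"))    # ['big red dog', 'red dog']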
I would like to see if some of the newer statistical methods in Python (being developed in this year's GSoC) incorporates this aspect, and at least does the complete case analysis.","AnswerCount":4,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":10010,"Q_Id":1377130,"Users Score":4,"Answer":"If you are willing to consider a library, pandas (http:\/\/pandas.pydata.org\/) is a library built on top of numpy which amongst many other things provides:\n\nIntelligent data alignment and integrated handling of missing data: gain automatic label-based alignment in computations and easily manipulate messy data into an orderly form\n\nI've been using it for almost one year in the financial industry where missing and badly aligned data is the norm and it really made my life easier.","Q_Score":11,"Tags":"python,numpy,data-analysis","A_Id":11086822,"CreationDate":"2009-09-04T03:44:00.000","Title":"How do you deal with missing data using numpy\/scipy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have functions that contribute to small parts of a figure generation. I'm trying to use these functions to generate multiple figures? So something like this:\n\nwork with Figure 1\ndo something else\nwork with Figure 2\ndo something else\nwork with Figure 1\ndo something else\nwork with Figure 2\n\nIf anyone could help, that'd be great!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43360,"Q_Id":1401102,"Users Score":0,"Answer":"The best way to show multiple figures is use matplotlib or pylab. (for windows)\nwith matplotlib you can prepare the figures and then when you finish the process with them you can show with the comand \"matplotlib.show()\" and all figures should be shown.\n(on linux) you don\u00b4t have problems adding changes to figures because the interactive mode is enable (on windows the interactive mode don't work OK).","Q_Score":32,"Tags":"python,matplotlib,figures","A_Id":14591411,"CreationDate":"2009-09-09T17:57:00.000","Title":"Python with matplotlib - drawing multiple figures in parallel","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm removing an item from an array if it exists.\nTwo ways I can think of to do this\nWay #1\n\n# x array, r item to remove\nif r in x :\n x.remove( r )\n\nWay #2\n\ntry :\n x.remove( r )\nexcept :\n pass\n\nTiming it shows the try\/except way can be faster\n(some times i'm getting:)\n\n1.16225508968e-06\n8.80804972547e-07\n\n1.14314196588e-06\n8.73752536492e-07\n\n\nimport timeit\n\nruns = 10000\nx = [ '101', '102', '103', '104', '105', 'a', 'b', 'c',\n 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',\n 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',\n 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',\n]\nr = 'a'\n\ncode1 =\"\"\"\nx = [ '101', '102', '103', '104', '105', 'a', 'b', 'c',\n 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',\n 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',\n 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',\n]\nr = 'a'\n\nif r in x :\n x.remove(r)\n\"\"\"\nprint timeit.Timer( code1 ).timeit( runs ) \/ runs\n\ncode2 =\"\"\"\nx = [ '101', '102', '103', '104', 
'105', 'a', 'b', 'c',\n 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',\n 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',\n 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',\n]\nr = 'a'\n\ntry :\n x.remove( r )\nexcept :\n pass\n\"\"\"\nprint timeit.Timer( code2 ).timeit( runs ) \/ runs\n\n\nWhich is more pythonic?","AnswerCount":5,"Available Count":3,"Score":0.1194272985,"is_accepted":false,"ViewCount":267,"Q_Id":1418266,"Users Score":3,"Answer":"Speed depends on the ratio of hits to misses. To be pythonic choose the clearer method.\nPersonally I think way#1 is clearer (It takes less lines to have an 'if' block rather than an exception block and also uses less brain space). It will also be faster when there are more hits than misses (an exception is more expensive than skipping a if block).","Q_Score":2,"Tags":"python","A_Id":1418321,"CreationDate":"2009-09-13T17:15:00.000","Title":"Which is more pythonic for array removal?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm removing an item from an array if it exists.\nTwo ways I can think of to do this\nWay #1\n\n# x array, r item to remove\nif r in x :\n x.remove( r )\n\nWay #2\n\ntry :\n x.remove( r )\nexcept :\n pass\n\nTiming it shows the try\/except way can be faster\n(some times i'm getting:)\n\n1.16225508968e-06\n8.80804972547e-07\n\n1.14314196588e-06\n8.73752536492e-07\n\n\nimport timeit\n\nruns = 10000\nx = [ '101', '102', '103', '104', '105', 'a', 'b', 'c',\n 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',\n 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',\n 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',\n]\nr = 'a'\n\ncode1 =\"\"\"\nx = [ '101', '102', '103', '104', '105', 'a', 'b', 'c',\n 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',\n 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',\n 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',\n]\nr = 'a'\n\nif r in x :\n x.remove(r)\n\"\"\"\nprint timeit.Timer( code1 ).timeit( runs ) \/ runs\n\ncode2 =\"\"\"\nx = [ '101', '102', '103', '104', '105', 'a', 'b', 'c',\n 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',\n 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',\n 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',\n]\nr = 'a'\n\ntry :\n x.remove( r )\nexcept :\n pass\n\"\"\"\nprint timeit.Timer( code2 ).timeit( runs ) \/ runs\n\n\nWhich is more pythonic?","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":267,"Q_Id":1418266,"Users Score":2,"Answer":"The try\/except way","Q_Score":2,"Tags":"python","A_Id":1418275,"CreationDate":"2009-09-13T17:15:00.000","Title":"Which is more pythonic for array removal?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm removing an item from an array if it exists.\nTwo ways I can think of to do this\nWay #1\n\n# x array, r item to remove\nif r in x :\n x.remove( r )\n\nWay #2\n\ntry :\n x.remove( r )\nexcept :\n pass\n\nTiming it shows the try\/except way can be faster\n(some times i'm getting:)\n\n1.16225508968e-06\n8.80804972547e-07\n\n1.14314196588e-06\n8.73752536492e-07\n\n\nimport timeit\n\nruns = 10000\nx = [ '101', '102', '103', 
'104', '105', 'a', 'b', 'c',\n 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',\n 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',\n 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',\n]\nr = 'a'\n\ncode1 =\"\"\"\nx = [ '101', '102', '103', '104', '105', 'a', 'b', 'c',\n 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',\n 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',\n 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',\n]\nr = 'a'\n\nif r in x :\n x.remove(r)\n\"\"\"\nprint timeit.Timer( code1 ).timeit( runs ) \/ runs\n\ncode2 =\"\"\"\nx = [ '101', '102', '103', '104', '105', 'a', 'b', 'c',\n 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',\n 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',\n 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',\n]\nr = 'a'\n\ntry :\n x.remove( r )\nexcept :\n pass\n\"\"\"\nprint timeit.Timer( code2 ).timeit( runs ) \/ runs\n\n\nWhich is more pythonic?","AnswerCount":5,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":267,"Q_Id":1418266,"Users Score":6,"Answer":"I've always gone with the first method. if in reads far more clearly than exception handling does.","Q_Score":2,"Tags":"python","A_Id":1418310,"CreationDate":"2009-09-13T17:15:00.000","Title":"Which is more pythonic for array removal?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to create a 2d matrix of integers with symmetric addressing ( i.e. matrix[2,3] and matrix[3,2] will return the same value ) in python. The integers will have addition and subtraction done on them, and be used for logical comparisons. My initial idea was to create the integer objects up front and try to fill a list of lists with some python equivalent of pointers. I'm not sure how to do it, though. What is the best way to implement this, and should I be using lists or another data structure?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":247,"Q_Id":1425162,"Users Score":1,"Answer":"You only need to store the lower triangle of the matrix. Typically this is done with one n(n+1)\/2 length list. You'll need to overload the __getitem__ method to interpret what the entry means.","Q_Score":1,"Tags":"python,data-structures,matrix","A_Id":1425181,"CreationDate":"2009-09-15T04:48:00.000","Title":"Symmetrically adressable matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to create a 2d matrix of integers with symmetric addressing ( i.e. matrix[2,3] and matrix[3,2] will return the same value ) in python. The integers will have addition and subtraction done on them, and be used for logical comparisons. My initial idea was to create the integer objects up front and try to fill a list of lists with some python equivalent of pointers. I'm not sure how to do it, though. What is the best way to implement this, and should I be using lists or another data structure?","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":247,"Q_Id":1425162,"Users Score":2,"Answer":"You're probably better off using a full square numpy matrix. 
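As an aside on the lower-triangle suggestion above (one flat list of n*(n+1)/2 entries plus an overloaded __getitem__): a minimal sketch of what that could look like. The class name and interface are illustrative, not taken from the question.

    class SymMatrix(object):
        # store only the lower triangle (i >= j) in one flat list
        def __init__(self, n, fill=0):
            self.n = n
            self.data = [fill] * (n * (n + 1) // 2)

        def _index(self, i, j):
            if i < j:                 # matrix[2, 3] and matrix[3, 2] share a slot
                i, j = j, i
            return i * (i + 1) // 2 + j

        def __getitem__(self, key):
            i, j = key
            return self.data[self._index(i, j)]

        def __setitem__(self, key, value):
            i, j = key
            self.data[self._index(i, j)] = value

    m = SymMatrix(4)
    m[2, 3] = 7
    print(m[3, 2])    # 7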
Yes, it wastes half the memory storing redundant values, but rolling your own symmetric matrix in Python will waste even more memory and CPU by storing and processing the integers as Python objects.","Q_Score":1,"Tags":"python,data-structures,matrix","A_Id":1425305,"CreationDate":"2009-09-15T04:48:00.000","Title":"Symmetrically adressable matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm attempting to shorten the memory footprint of 10B sequential integers by referencing them as indexes in a boolean array. In other words, I need to create an array of 10,000,000,000 elements, but that's well into the \"Long\" range. When I try to reference an array index greater than sys.maxint the array blows up:\n\nx = [False] * 10000000000\nTraceback (most recent call last):\n File \"\", line 1, in \n x = [0] * 10000000000\nOverflowError: cannot fit 'long' into an index-sized integer\n\nAnything I can do? I can't seem to find anyone on the net having this problem... Presumably the answer is \"python can't handle arrays bigger than 2B.\"","AnswerCount":6,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":4919,"Q_Id":1436411,"Users Score":2,"Answer":"10B booleans (1.25MB of memory, assuming Python is sane)\n\nI think you have your arithmetic wrong -- stored supercompactly, 10B booleans would be 1.25 GIGA, _not__ MEGA, bytes.\nA list takes at least 4 bytes per item, so you'd need 40GB to do it the way you want.\nYou can store an array (see the array module in the standard library) in much less memory than that, so it might possibly fit.","Q_Score":2,"Tags":"python","A_Id":1436482,"CreationDate":"2009-09-17T02:18:00.000","Title":"long-index arrays in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm attempting to shorten the memory footprint of 10B sequential integers by referencing them as indexes in a boolean array. In other words, I need to create an array of 10,000,000,000 elements, but that's well into the \"Long\" range. When I try to reference an array index greater than sys.maxint the array blows up:\n\nx = [False] * 10000000000\nTraceback (most recent call last):\n File \"\", line 1, in \n x = [0] * 10000000000\nOverflowError: cannot fit 'long' into an index-sized integer\n\nAnything I can do? I can't seem to find anyone on the net having this problem... Presumably the answer is \"python can't handle arrays bigger than 2B.\"","AnswerCount":6,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":4919,"Q_Id":1436411,"Users Score":3,"Answer":"A dense bit vector is plausible but it won't be optimal unless you know you won't have more than about 10**10 elements, all clustered near each other, with a reasonably randomized distribution. If you have a different distribution, then a different structure will be better. \nFor instance, if you know that in that range, [0,10**10), only a few members are present, use a set(), or if the reverse is true, with nearly every element present except for a fraction, use a negated set, ie element not in mySet. 
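To make the dense-bit-vector option concrete: a hedged sketch using a bytearray (one bit per index, so roughly 1.25 GB for 10**10 indices, matching the arithmetic above) rather than the array module mentioned earlier. The class is an illustration only.

    class BitVector(object):
        def __init__(self, size):
            self.bits = bytearray((size + 7) // 8)   # one bit per possible index

        def add(self, i):
            self.bits[i >> 3] |= 1 << (i & 7)

        def discard(self, i):
            self.bits[i >> 3] &= ~(1 << (i & 7)) & 0xFF

        def __contains__(self, i):
            return bool(self.bits[i >> 3] & (1 << (i & 7)))

    v = BitVector(10 ** 6)   # keep the demo small
    v.add(12345)
    print(12345 in v)   # True
    print(54321 in v)   # False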
\nIf the elements tend to cluster around small ranges, you could use a run length encoding, something like [xrange(0,10),xrange(10,15),xrange(15,100)], which you lookup into by bisecting until you find a matching range, and if the index is even, then the element is in the set. inserts and removals involve shuffling the ranges a bit.\nIf your distribution really is dense, but you need a little more than what fits in memory (seems to be typical in practice) then you can manage memory by using mmap and wrapping the mapped file with an adaptor that uses a similar mechanism to the suggested array('I') solution already suggested.\nTo get an idea of just how compressible you can possibly get, try building a plain file with a reasonable corpus of data in packed form and then apply a general compression algorithm (such as gzip) to see how much reduction you see. If there is much reduction, then you can probably use some sort of space optimization in your code as well.","Q_Score":2,"Tags":"python","A_Id":1436547,"CreationDate":"2009-09-17T02:18:00.000","Title":"long-index arrays in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know that __builtin__ sorted() function works on any iterable. But can someone explain this huge (10x) performance difference between anylist.sort() vs sorted(anylist) ? Also, please point out if I am doing anything wrong with way this is measured. \n\n\"\"\"\nExample Output:\n$ python list_sort_timeit.py \nUsing sort method: 20.0662879944\nUsing sorted builin method: 259.009809017\n\"\"\"\n\nimport random\nimport timeit\n\nprint 'Using sort method:',\nx = min(timeit.Timer(\"test_list1.sort()\",\"import random;test_list1=random.sample(xrange(1000),1000)\").repeat())\nprint x\n\nprint 'Using sorted builin method:',\nx = min(timeit.Timer(\"sorted(test_list2)\",\"import random;test_list2=random.sample(xrange(1000),1000)\").repeat())\nprint x\n\n\nAs the title says, I was interested in comparing list.sort() vs sorted(list). The above snippet showed something interesting that, python's sort function behaves very well for already sorted data. As pointed out by Anurag, in the first case, the sort method is working on already sorted data and while in second sorted it is working on fresh piece to do work again and again. \nSo I wrote this one to test and yes, they are very close.\n\n\"\"\"\nExample Output:\n$ python list_sort_timeit.py \nUsing sort method: 19.0166599751\nUsing sorted builin method: 23.203567028\n\"\"\"\n\nimport random\nimport timeit\n\nprint 'Using sort method:',\nx = min(timeit.Timer(\"test_list1.sort()\",\"import random;test_list1=random.sample(xrange(1000),1000);test_list1.sort()\").repeat())\nprint x\n\nprint 'Using sorted builin method:',\nx = min(timeit.Timer(\"sorted(test_list2)\",\"import random;test_list2=random.sample(xrange(1000),1000);test_list2.sort()\").repeat())\nprint x\n\nOh, I see Alex Martelli with a response, as I was typing this one.. ( I shall leave the edit, as it might be useful).","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":63711,"Q_Id":1436962,"Users Score":9,"Answer":"Well, the .sort() method of lists sorts the list in place, while sorted() creates a new list. 
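A minimal illustration of the in-place versus copy distinction just described:

    data = [3, 1, 2]
    new = sorted(data)    # builds and returns a new sorted list; 'data' is untouched
    data.sort()           # reorders 'data' in place and returns None
    print(new == data)    # True, but 'new' is a separate list object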
So if you have a large list, part of your performance difference will be due to copying.\nStill, an order of magnitude difference seems larger than I'd expect. Perhaps list.sort() has some special-cased optimization that sorted() can't make use of. For example, since the list class already has an internal Py_Object*[] array of the right size, perhaps it can perform swaps more efficiently.\nEdit: Alex and Anurag are right, the order of magnitude difference is due to you accidentally sorting an already-sorted list in your test case. However, as Alex's benchmarks show, list.sort() is about 2% faster than sorted(), which would make sense due to the copying overhead.","Q_Score":40,"Tags":"python,sorting","A_Id":1436981,"CreationDate":"2009-09-17T06:07:00.000","Title":"Python sort() method on list vs builtin sorted() function","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to plot some data in ipy.\nThe data consists of three variables alpha,beta and delta. The alpha and beta values are the coordinates of the data points that I wish to plot using a hammer projection. I want to scale the colour of the markers according to the delta values, preferably in a rainbow scale colormap i.e. from red to blue. The delta values range from 0-13 and I want a linear colour correlation.\nCan anyone please help me, I am getting very frustrated.\nMany thanks\nAngela","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":279,"Q_Id":1448582,"Users Score":0,"Answer":"Well i can\\t quite get exactly what are you looking for, whole solution or maybe you've got problem with colors, if that second thing below you got some ideas\nThe most simple solution would be to create 14-elements array with color that you write from hand with help of some graphic software and than simply fetch element with delta number while drawing\nYou can also search some algorithm to draw gradient, but instead of drawing store its color to array, if you wil genrate 130 values then you can get your color like that \nmycolors[delta*10]\nBe more specific with your question, maybe then more people will abe bale to help you\ni hope that my answer in some way helps\nMTH","Q_Score":0,"Tags":"ironpython","A_Id":1448617,"CreationDate":"2009-09-19T13:47:00.000","Title":"Colour map in ipython","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to:\n\nOpen a video file\nIterate over the frames of the file as images\nDo some analysis in this image frame of the video\nDraw in this image of the video\nCreate a new video with these changes\n\nOpenCV isn't working for my webcam, but python-gst is working. 
Is this possible using python-gst?\nThank you!","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":12979,"Q_Id":1480431,"Users Score":7,"Answer":"Do you mean opencv can't connect to your webcam or can't read video files recorded by it?\nHave you tried saving the video in an other format?\nOpenCV is probably the best supported python image processing tool","Q_Score":13,"Tags":"python,image-processing,video-processing","A_Id":1480450,"CreationDate":"2009-09-26T04:23:00.000","Title":"Most used Python module for video processing?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to:\n\nOpen a video file\nIterate over the frames of the file as images\nDo some analysis in this image frame of the video\nDraw in this image of the video\nCreate a new video with these changes\n\nOpenCV isn't working for my webcam, but python-gst is working. Is this possible using python-gst?\nThank you!","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":12979,"Q_Id":1480431,"Users Score":0,"Answer":"Just build a C\/C++ wrapper for your webcam and then use SWIG or SIP to access these functions from Python. Then use OpenCV in Python that's the best open sourced computer vision library in the wild.\nIf you worry for performance and you work under Linux, you could download free versions of Intel Performance Primitives (IPP) that could be loaded at runtime with zero effort from OpenCV. For certain algorithms you could get a 200% boost of performances, plus automatic multicore support for most of time consuming functions.","Q_Score":13,"Tags":"python,image-processing,video-processing","A_Id":1497418,"CreationDate":"2009-09-26T04:23:00.000","Title":"Most used Python module for video processing?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a plot of two boxplots in the same figure. For style reasons, the axis should have the same length, so that the graphic box is square. I tried to use the set_aspect method, but the axes are too different because of their range\nand the result is terrible.\nIs it possible to have 1:1 axes even if they do not have the same number of points?","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":8361,"Q_Id":1506647,"Users Score":3,"Answer":"Try axis('equal'). It's been a while since I worked with matplotlib, but I seem to remember typing that command a lot.","Q_Score":4,"Tags":"python,matplotlib,boxplot","A_Id":1506741,"CreationDate":"2009-10-01T21:37:00.000","Title":"Matplotlib square boxplot","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What algorithm is the built in sort() method in Python using? 
Is it possible to have a look at the code for that method?","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":71971,"Q_Id":1517347,"Users Score":10,"Answer":"In early python-versions, the sort function implemented a modified version of quicksort.\nHowever, it was deemed unstable and as of 2.3 they switched to using an adaptive mergesort algorithm.","Q_Score":128,"Tags":"python,algorithm,sorting,python-internals","A_Id":1517357,"CreationDate":"2009-10-04T20:48:00.000","Title":"About Python's built in sort() method","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I check which version of NumPy I'm using?","AnswerCount":17,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":514245,"Q_Id":1520234,"Users Score":0,"Answer":"It is good to know the version of numpy you run, but strictly speaking if you just need to have specific version on your system you can write like this:\npip install numpy==1.14.3 and this will install the version you need and uninstall other versions of numpy.","Q_Score":340,"Tags":"python,numpy,version","A_Id":53898417,"CreationDate":"2009-10-05T13:56:00.000","Title":"How do I check which version of NumPy I'm using?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I check which version of NumPy I'm using?","AnswerCount":17,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":514245,"Q_Id":1520234,"Users Score":11,"Answer":"You can try this:\n\npip show numpy","Q_Score":340,"Tags":"python,numpy,version","A_Id":46330631,"CreationDate":"2009-10-05T13:56:00.000","Title":"How do I check which version of NumPy I'm using?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a series of large text files (up to 1 gig) that are output from an experiment that need to be analysed in Python. They would be best loaded into a 2D numpy array, which presents the first question:\n\nAs the number of rows is unknown at the beginning of the loading, how can\na very large numpy array be most efficiently built, row by row?\n\nSimply adding the row to the array would be inefficient in memory terms, as two large arrays would momentarily co-exist. The same problem would seem to be occur if you use numpy.append. The stack functions are promising, but ideally I would want to grow the array in place.\nThis leads to the second question:\n\nWhat is the best way to observe the memory usage of a Python program that heavily\nuses numpy arrays?\n\nTo study the above problem, I've used the usual memory profiling tools - heapy and pympler - but am only getting the size of the outer array objects (80 bytes) and not the data they are containing. 
Asides from a crude measuring of how much memory the Python process is using, how can I get at the \"full\" size of the arrays as they grow?\nLocal details: OSX 10.6, Python 2.6, but general solutions are welcome.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":6464,"Q_Id":1530960,"Users Score":1,"Answer":"On possible option is to do a single pass through the file first to count the number of rows, without loading them.\nThe other option is to double your table size each time, which has two benefits:\n\nYou will only re-alloc memory log(n) times where n is the number of rows.\nYou only need 50% more ram than your largest table size\n\nIf you take the dynamic route, you could measure the length of the first row in bytes, then guess the number of rows by calculating (num bytes in file \/ num bytes in first row). Start with a table of this size.","Q_Score":11,"Tags":"python,numpy,memory-management","A_Id":1534340,"CreationDate":"2009-10-07T11:08:00.000","Title":"Incrementally building a numpy array and measuring memory usage","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for Python implementation of k-means algorithm with examples to cluster and cache my database of coordinates.","AnswerCount":8,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":89635,"Q_Id":1545606,"Users Score":0,"Answer":"You can also use GDAL, which has many many functions to work with spatial data.","Q_Score":49,"Tags":"python,algorithm,cluster-analysis,k-means","A_Id":1545672,"CreationDate":"2009-10-09T19:16:00.000","Title":"Python k-means algorithm","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to profile some VCS software, and to do so I want to generate a set of random files, in randomly arranged directories. I'm writing the script in Python, but my question is briefly: how do I generate a random directory tree with an average number of sub-directories per directory and some broad distribution of files per directory?\nClarification: I'm not comparing different VCS repo formats (eg. SVN vs Git vs Hg), but profiling software that deals with SVN (and eventually other) working copies and repos.\nThe constraints I'd like are to specify the total number of files (call it 'N', probably ~10k-100k) and the maximum depth of the directory structure ('L', probably 2-10). I don't care how many directories are generated at each level, and I don't want to end up with 1 file per dir, or 100k all in one dir.\nThe distribution is something I'm not sure about, since I don't know whether VCS' (SVN in particular) would perform better or worse with a very uniform structure or a very skewed structure. Nonetheless, it would be nice if I could come up with an algorithm that didn't \"even out\" for large numbers.\nMy first thoughts were: generate the directory tree using some method, and then uniformly populate the tree with files (treating each dir equally, with no regard as to nesting). My back-of-the-envelope calcs tell me that if there are 'L' levels, with 'D' subdirs per dir, and about sqrt(N) files per dir, then there will be about D^L dirs, so N =~ sqrt(N)*(D^L) => D =~ N^(1\/2L). 
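A quick numeric check of the back-of-the-envelope relation above, D =~ N**(1/(2L)); estimate_branching is a hypothetical helper written for this example only.

    def estimate_branching(n_files, max_depth):
        # D =~ N**(1/(2L)): with D subdirs per dir and about sqrt(N) files per dir,
        # D**L directories hold roughly N files in total
        return max(2, int(round(n_files ** (1.0 / (2.0 * max_depth)))))

    print(estimate_branching(100000, 5))   # about 3 subdirs per dir for N=100k, L=5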
So now I have an approximate value for 'D', how can I generate the tree? How do I populate the files?\nI'd be grateful just for some pointers to good resources on algorithms I could use. My searching only found pretty applets\/flash.","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3603,"Q_Id":1553114,"Users Score":5,"Answer":"Why not download some real open source repos and use those?\nHave you thought about what goes into the files? is that random data too?","Q_Score":2,"Tags":"python,algorithm","A_Id":1553126,"CreationDate":"2009-10-12T07:06:00.000","Title":"Generate random directories\/files given number of files and depth","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list. It contains x lists, each with y elements.\nI want to pair each element with all the other elements, just once, (a,b = b,a)\nEDIT: this has been criticized as being too vague.So I'll describe the history.\nMy function produces random equations and using genetic techniques, mutates and crossbreeds them, selecting for fitness.\nAfter a number of iterations, it returns a list of 12 objects, sorted by fitness of their 'equation' attribute.\nUsing the 'parallel python' module to run this function 8 times, a list containing 8 lists of 12 objects (each with an equation attribute) each is returned.\nNow, within each list, the 12 objects have already been cross-bread with each other.\nI want to cross-breed each object in a list with all the other objects in all the other lists, but not with the objects within it's own list with which it has already been cross-bread. (whew!)","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":598,"Q_Id":1591762,"Users Score":1,"Answer":"You haven't made it completely clear what you need. It sounds like itertools should have what you need. Perhaps what you wish is an itertools.combinations of the itertools.product of the lists in your big list. \n@fortran: you can't have a set of sets. You can have a set of frozensets, but depending on what it really means to have duplicates here, that might not be what is needed.","Q_Score":2,"Tags":"python","A_Id":1592512,"CreationDate":"2009-10-19T23:51:00.000","Title":"How to make all combinations of the elements in an array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list. 
It contains x lists, each with y elements.\nI want to pair each element with all the other elements, just once, (a,b = b,a)\nEDIT: this has been criticized as being too vague.So I'll describe the history.\nMy function produces random equations and using genetic techniques, mutates and crossbreeds them, selecting for fitness.\nAfter a number of iterations, it returns a list of 12 objects, sorted by fitness of their 'equation' attribute.\nUsing the 'parallel python' module to run this function 8 times, a list containing 8 lists of 12 objects (each with an equation attribute) each is returned.\nNow, within each list, the 12 objects have already been cross-bread with each other.\nI want to cross-breed each object in a list with all the other objects in all the other lists, but not with the objects within it's own list with which it has already been cross-bread. (whew!)","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":598,"Q_Id":1591762,"Users Score":0,"Answer":"First of all, please don't refer to this as an \"array\". You are using a list of lists. In Python, an array is a different type of data structure, provided by the array module. \nAlso, your application sounds suspiciously like a matrix. If you are really doing matrix manipulations, you should investigate the Numpy package.\nAt first glance your problem sounded like something that the zip() function would solve or itertools.izip(). You should definitely read through the docs for the itertools module because it has various list manipulations and they will run faster than anything you could write yourself in Python.","Q_Score":2,"Tags":"python","A_Id":1591802,"CreationDate":"2009-10-19T23:51:00.000","Title":"How to make all combinations of the elements in an array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python program that does something like this:\n\nRead a row from a csv file.\nDo some transformations on it.\nBreak it up into the actual rows as they would be written to the database.\nWrite those rows to individual csv files.\nGo back to step 1 unless the file has been totally read.\nRun SQL*Loader and load those files into the database.\n\nStep 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.\nThere are a few ideas that I have to solve this:\n\nRead the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?\nParallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.\nBreak the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.\n\nOf course, the correct answer to this question is \"do what you find to be the fastest by testing.\" However, I'm mainly trying to get an idea of where I should spend my time first. 
Does anyone with more experience in these matters have any advice?","AnswerCount":7,"Available Count":5,"Score":0.0855049882,"is_accepted":false,"ViewCount":2504,"Q_Id":1594604,"Users Score":3,"Answer":"If you are I\/O bound, the best way I have found to optimize is to read or write the entire file into\/out of memory at once, then operate out of RAM from there on.\nWith extensive testing I found that my runtime eded up bound not by the amount of data I read from\/wrote to disk, but by the number of I\/O operations I used to do it. That is what you need to optimize.\nI don't know Python, but if there is a way to tell it to write the whole file out of RAM in one go, rather than issuing a separate I\/O for each byte, that's what you need to do.\nOf course the drawback to this is that files can be considerably larger than available RAM. There are lots of ways to deal with that, but that is another question for another time.","Q_Score":3,"Tags":"python,performance,optimization,file-io","A_Id":1594704,"CreationDate":"2009-10-20T13:27:00.000","Title":"How should I optimize this filesystem I\/O bound program?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python program that does something like this:\n\nRead a row from a csv file.\nDo some transformations on it.\nBreak it up into the actual rows as they would be written to the database.\nWrite those rows to individual csv files.\nGo back to step 1 unless the file has been totally read.\nRun SQL*Loader and load those files into the database.\n\nStep 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.\nThere are a few ideas that I have to solve this:\n\nRead the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?\nParallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.\nBreak the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.\n\nOf course, the correct answer to this question is \"do what you find to be the fastest by testing.\" However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?","AnswerCount":7,"Available Count":5,"Score":0.0285636566,"is_accepted":false,"ViewCount":2504,"Q_Id":1594604,"Users Score":1,"Answer":"Use buffered writes for step 4.\nWrite a simple function that simply appends the output onto a string, checks the string length, and only writes when you have enough which should be some multiple of 4k bytes. 
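A hedged sketch of the buffered-write idea just described: collect pieces in memory and only hit the disk once roughly a buffer's worth has accumulated. The class and its interface are illustrative; a list join is used instead of repeated string concatenation to keep appends cheap.

    class BufferedWriter(object):
        def __init__(self, f, limit=32 * 1024):   # flush in ~32 KB chunks
            self.f = f
            self.limit = limit
            self.parts = []
            self.size = 0

        def write(self, s):
            self.parts.append(s)
            self.size += len(s)
            if self.size >= self.limit:
                self.flush()

        def flush(self):
            # call this once more at the end, before closing the underlying file
            if self.parts:
                self.f.write(''.join(self.parts))
                self.parts = []
                self.size = 0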
I would say start with 32k buffers and time it.\nYou would have one buffer per file, so that most \"writes\" won't actually hit the disk.","Q_Score":3,"Tags":"python,performance,optimization,file-io","A_Id":1595358,"CreationDate":"2009-10-20T13:27:00.000","Title":"How should I optimize this filesystem I\/O bound program?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python program that does something like this:\n\nRead a row from a csv file.\nDo some transformations on it.\nBreak it up into the actual rows as they would be written to the database.\nWrite those rows to individual csv files.\nGo back to step 1 unless the file has been totally read.\nRun SQL*Loader and load those files into the database.\n\nStep 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.\nThere are a few ideas that I have to solve this:\n\nRead the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?\nParallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.\nBreak the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.\n\nOf course, the correct answer to this question is \"do what you find to be the fastest by testing.\" However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?","AnswerCount":7,"Available Count":5,"Score":1.2,"is_accepted":true,"ViewCount":2504,"Q_Id":1594604,"Users Score":3,"Answer":"Python already does IO buffering and the OS should handle both prefetching the input file and delaying writes until it needs the RAM for something else or just gets uneasy about having dirty data in RAM for too long. Unless you force the OS to write them immediately, like closing the file after each write or opening the file in O_SYNC mode.\nIf the OS isn't doing the right thing, you can try raising the buffer size (third parameter to open()). For some guidance on appropriate values given a 100MB\/s 10ms latency IO system a 1MB IO size will result in approximately 50% latency overhead, while a 10MB IO size will result in 9% overhead. If its still IO bound, you probably just need more bandwidth. Use your OS specific tools to check what kind of bandwidth you are getting to\/from the disks.\nAlso useful is to check if step 4 is taking a lot of time executing or waiting on IO. 
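A short illustration of the "third parameter to open()" suggestion above; the file name and the generated rows are made up for the demo.

    rows = ('%d,%d\n' % (i, i * i) for i in range(1000))   # stand-in for the real rows
    out = open('rows_0001.csv', 'w', 1024 * 1024)          # third argument: ~1 MB write buffer
    for row in rows:
        out.write(row)
    out.close()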
If it's executing you'll need to spend more time checking which part is the culprit and optimize that, or split out the work to different processes.","Q_Score":3,"Tags":"python,performance,optimization,file-io","A_Id":1595626,"CreationDate":"2009-10-20T13:27:00.000","Title":"How should I optimize this filesystem I\/O bound program?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python program that does something like this:\n\nRead a row from a csv file.\nDo some transformations on it.\nBreak it up into the actual rows as they would be written to the database.\nWrite those rows to individual csv files.\nGo back to step 1 unless the file has been totally read.\nRun SQL*Loader and load those files into the database.\n\nStep 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.\nThere are a few ideas that I have to solve this:\n\nRead the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?\nParallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.\nBreak the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.\n\nOf course, the correct answer to this question is \"do what you find to be the fastest by testing.\" However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?","AnswerCount":7,"Available Count":5,"Score":0.057080742,"is_accepted":false,"ViewCount":2504,"Q_Id":1594604,"Users Score":2,"Answer":"Can you use a ramdisk for step 4? Low millions sounds doable if the rows are less than a couple of kB or so.","Q_Score":3,"Tags":"python,performance,optimization,file-io","A_Id":1597062,"CreationDate":"2009-10-20T13:27:00.000","Title":"How should I optimize this filesystem I\/O bound program?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python program that does something like this:\n\nRead a row from a csv file.\nDo some transformations on it.\nBreak it up into the actual rows as they would be written to the database.\nWrite those rows to individual csv files.\nGo back to step 1 unless the file has been totally read.\nRun SQL*Loader and load those files into the database.\n\nStep 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.\nThere are a few ideas that I have to solve this:\n\nRead the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. 
The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?\nParallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.\nBreak the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.\n\nOf course, the correct answer to this question is \"do what you find to be the fastest by testing.\" However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?","AnswerCount":7,"Available Count":5,"Score":0.0285636566,"is_accepted":false,"ViewCount":2504,"Q_Id":1594604,"Users Score":1,"Answer":"Isn't it possible to collect a few thousand rows in ram, then go directly to the database server and execute them? \nThis would remove the save to and load from the disk that step 4 entails.\nIf the database server is transactional, this is also a safe way to do it - just have the database begin before your first row and commit after the last.","Q_Score":3,"Tags":"python,performance,optimization,file-io","A_Id":1597281,"CreationDate":"2009-10-20T13:27:00.000","Title":"How should I optimize this filesystem I\/O bound program?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose my attempt to write a pickle object out to disk is incomplete due to a crash. Will an attempt to unpickle the object always lead to an exception or is it possible that the fragment that was written out may be interpreted as valid pickle and the error go unnoticed?","AnswerCount":5,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":1351,"Q_Id":1653897,"Users Score":7,"Answer":"Contra the other answers offered, I believe that we can make a strong argument about the recoverability of a pickle. That answer is: \"Yes, an incomplete pickle always leads to an exception.\"\nWhy are we able to do this? Because the \"pickle\" format is in fact a small stack-based language. In a stack-based language you write code that pushes item after item on a stack, then invoke an operator that does something with the data you've accumulated. And it just so happens that a pickle has to end with the command \".\", which says: \"take the item now at the bottom of the stack and return it as the value of this pickle.\" If your pickle is chopped off early, it will not end with this command, and you will get an EOF error.\nIf you want to try recovering some of the data, you might have to write your own interpreter, or call into pickle.py somewhere that gets around its wanting to raise EOFError when done interpreting the stack without finding a \".\". The main thing to keep in mind is that, as in most stack-based languages, big data structures are built \"backwards\": first you put lots of little strings or numbers on the stack, then you invoke an operation that says \"put those together into a list\" or \"grab pairs of items on the stack and make a dictionary\". 
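The "missing final opcode" behaviour described here is easy to observe; a hedged sketch follows (the exact exception type varies between pickle protocols and Python versions, hence the broad except).

    import pickle

    blob = pickle.dumps({'a': [1, 2, 3], 'b': 'text'})
    try:
        pickle.loads(blob[:-1])      # drop the final '.' (STOP) opcode
    except Exception as exc:         # EOFError or UnpicklingError, depending on version
        print('truncated pickle rejected: %r' % exc)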
So, if a pickle is interrupted, you'll find the stack full of pieces of the object that was going to be built, but you'll be missing that final code that tells you what was going to be built from the pieces.","Q_Score":4,"Tags":"python","A_Id":1654390,"CreationDate":"2009-10-31T09:38:00.000","Title":"If pickling was interrupted, will unpickling necessarily always fail? - Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose my attempt to write a pickle object out to disk is incomplete due to a crash. Will an attempt to unpickle the object always lead to an exception or is it possible that the fragment that was written out may be interpreted as valid pickle and the error go unnoticed?","AnswerCount":5,"Available Count":4,"Score":0.0399786803,"is_accepted":false,"ViewCount":1351,"Q_Id":1653897,"Users Score":1,"Answer":"I doubt you could make a claim that it will always lead to an exception. Pickles are actually programs written in a specialized stack language. The internal details of pickles change from version to version, and new pickle protocols are added occasionally. The state of the pickle after a crash, and the resulting effects on the unpickler, would be very difficult to summarize in a simple statement like \"it will always lead to an exception\".","Q_Score":4,"Tags":"python","A_Id":1654321,"CreationDate":"2009-10-31T09:38:00.000","Title":"If pickling was interrupted, will unpickling necessarily always fail? - Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose my attempt to write a pickle object out to disk is incomplete due to a crash. Will an attempt to unpickle the object always lead to an exception or is it possible that the fragment that was written out may be interpreted as valid pickle and the error go unnoticed?","AnswerCount":5,"Available Count":4,"Score":0.0399786803,"is_accepted":false,"ViewCount":1351,"Q_Id":1653897,"Users Score":1,"Answer":"To be sure that you have a \"complete\" pickle file, you need to pickle three things.\n\nPickle a header of some kind that claims how many objects and what the end-of-file flag will look like. A tuple of an integer and the EOF string, for example.\nPickle the objects you actually care about. The count is given by the header.\nPickle a tail object that you don't actually care about, but which simply matches the claim made in the header. This can be simply a string that matches what was in the header.\n\nWhen you unpickle this file, you have to unpickle three things:\n\nThe header. You care about the count and the form of the tail.\nThe objects you actually care about.\nThe tail object. Check that it matches the header. Other than that, it doesn't convey much except that the file was written in it's entirety.","Q_Score":4,"Tags":"python","A_Id":1654503,"CreationDate":"2009-10-31T09:38:00.000","Title":"If pickling was interrupted, will unpickling necessarily always fail? 
- Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose my attempt to write a pickle object out to disk is incomplete due to a crash. Will an attempt to unpickle the object always lead to an exception or is it possible that the fragment that was written out may be interpreted as valid pickle and the error go unnoticed?","AnswerCount":5,"Available Count":4,"Score":0.0798297691,"is_accepted":false,"ViewCount":1351,"Q_Id":1653897,"Users Score":2,"Answer":"Pickling an object returns an str object, or writes an str object to a file ... it doesn't modify the original object. If a \"crash\" (exception) happens inside a pickling call, the result won't be returned to the caller, so you don't have anything that you could try to unpickle. Besides, why would you want to unpickle some dud rubbish left over after an exception?","Q_Score":4,"Tags":"python","A_Id":1654329,"CreationDate":"2009-10-31T09:38:00.000","Title":"If pickling was interrupted, will unpickling necessarily always fail? - Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I regularly make figures (the exploratory data analysis type) in R. I also program in Python and was wondering if there are features or concepts in matplotlib that would be worth learning. For instance, I am quite happy with R - but its image() function will produce large files with pixelated output, whereas Matlab's equivalent figure (I also program regularly in Matlab) seems to be manageable in file size and also 'smoothed' - does matplotlib also provide such reductions...? But more generally, I wonder what other advantages matplotlib might confer. I don't mean this to be a trolling question. Thanks.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":10755,"Q_Id":1661479,"Users Score":13,"Answer":"This is a tough one to answer. \nI recently switched some of my graphing workload from R to matplotlib. In my humble opinion, I find matplotlib's graphs to be prettier (better default colors, they look crisper and more modern). I also think matplotlib renders PNGs a whole lot better.\nThe real motivation for me though, was that I wanted to work with my underlying data in Python (and numpy) and not R. I think this is the big question to ask, in which language do you want to load, parse and manipulate your data?\nOn the other hand, a bonus for R is that the plotting defaults just work (there's a function for everything). I find myself frequently digging through the matplotlib docs (they are thick) looking for some obscure way to adjust a border or increase a line thickness. R's plotting routines have some maturity behind them.","Q_Score":11,"Tags":"python,r,matplotlib,scipy,data-visualization","A_Id":1662207,"CreationDate":"2009-11-02T14:01:00.000","Title":"matplotlib for R user?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I regularly make figures (the exploratory data analysis type) in R. 
I also program in Python and was wondering if there are features or concepts in matplotlib that would be worth learning. For instance, I am quite happy with R - but its image() function will produce large files with pixelated output, whereas Matlab's equivalent figure (I also program regularly in Matlab) seems to be manageable in file size and also 'smoothed' - does matplotlib also provide such reductions...? But more generally, I wonder what other advantages matplotlib might confer. I don't mean this to be a trolling question. Thanks.","AnswerCount":2,"Available Count":2,"Score":0.3799489623,"is_accepted":false,"ViewCount":10755,"Q_Id":1661479,"Users Score":4,"Answer":"I think that the largest advantage is that matplotlib is based on Python, which you say you already know. So, this is one language less to learn. Just spend the time mastering Python, and you'll benefit both directly for the plotting task at hand and indirectly for your other Python needs. \nBesides, IMHO Python is an overall richer language than R, with far more libraries that can help for various tasks. You have to access data for plotting, and data comes in many forms. In whatever form it comes I'm sure Python has an efficient library for it.\nAnd how about embedding those plots in more complete programs, say simple GUIs? matplotlib binds easily with Python's GUI libs (like PyQT) and you can make stuff that only your imagination limits.","Q_Score":11,"Tags":"python,r,matplotlib,scipy,data-visualization","A_Id":1662225,"CreationDate":"2009-11-02T14:01:00.000","Title":"matplotlib for R user?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using their default POS tagging and default tokenization..and it seems sufficient. I'd like their default chunker too.\nI am reading the NLTK toolkit book, but it does not seem like they have a default chunker?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4560,"Q_Id":1687510,"Users Score":9,"Answer":"You can get out of the box named entity chunking with the nltk.ne_chunk() method. It takes a list of POS tagged tuples:\nnltk.ne_chunk([('Barack', 'NNP'), ('Obama', 'NNP'), ('lives', 'NNS'), ('in', 'IN'), ('Washington', 'NNP')])\nresults in:\nTree('S', [Tree('PERSON', [('Barack', 'NNP')]), Tree('ORGANIZATION', [('Obama', 'NNP')]), ('lives', 'NNS'), ('in', 'IN'), Tree('GPE', [('Washington', 'NNP')])])\nIt identifies Barack as a person, but Obama as an organization. So, not perfect.","Q_Score":9,"Tags":"python,nlp,nltk,chunking","A_Id":1687712,"CreationDate":"2009-11-06T13:10:00.000","Title":"What is the default chunker for NLTK toolkit in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example...\nChicken is an animal.\nBurrito is a food.\nWordNet allows you to do \"is-a\"...the hiearchy feature.\nHowever, how do I know when to stop travelling up the tree? I want a LEVEL.\nThat is consistent.\nFor example, if presented with a bunch of words, I want wordNet to categorize all of them, but at a certain level, so it doesn't go too far up. Categorizing \"burrito\" as a \"thing\" is too broad, yet \"mexican wrapped food\" is too specific. 
I want to go up the hiearchy or down..until the right LEVEL.","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":2585,"Q_Id":1695971,"Users Score":0,"Answer":"sorry, may I ask which tool could judge \"difficulty level\" of sentences?\nI wish to find out \"similar difficulty level\" of sentences for user to read.","Q_Score":6,"Tags":"python,text,nlp,words,wordnet","A_Id":58050062,"CreationDate":"2009-11-08T10:29:00.000","Title":"Does WordNet have \"levels\"? (NLP)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example...\nChicken is an animal.\nBurrito is a food.\nWordNet allows you to do \"is-a\"...the hiearchy feature.\nHowever, how do I know when to stop travelling up the tree? I want a LEVEL.\nThat is consistent.\nFor example, if presented with a bunch of words, I want wordNet to categorize all of them, but at a certain level, so it doesn't go too far up. Categorizing \"burrito\" as a \"thing\" is too broad, yet \"mexican wrapped food\" is too specific. I want to go up the hiearchy or down..until the right LEVEL.","AnswerCount":5,"Available Count":4,"Score":0.0798297691,"is_accepted":false,"ViewCount":2585,"Q_Id":1695971,"Users Score":2,"Answer":"In order to get levels, you need to predefine the content of each level. An ontology often defines these as the immediate IS_A children of a specific concept, but if that is absent, you need to develop a method of that yourself.\nThe next step is to put a priority on each concept, in case you want to present only one category for each word. The priority can be done in multiple ways, for instance as the count of IS_A relations between the category and the word, or manually selected priorities for each category. For each word, you can then pick the category with the highest priority. For instance, you may want meat to be \"food\" rather than chemical substance.\nYou may also want to pick some words, that change priority if they are in the path. For instance, if you want some chemicals which are also food, to be announced as chemicals, but others should still be food.","Q_Score":6,"Tags":"python,text,nlp,words,wordnet","A_Id":1696133,"CreationDate":"2009-11-08T10:29:00.000","Title":"Does WordNet have \"levels\"? (NLP)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example...\nChicken is an animal.\nBurrito is a food.\nWordNet allows you to do \"is-a\"...the hiearchy feature.\nHowever, how do I know when to stop travelling up the tree? I want a LEVEL.\nThat is consistent.\nFor example, if presented with a bunch of words, I want wordNet to categorize all of them, but at a certain level, so it doesn't go too far up. Categorizing \"burrito\" as a \"thing\" is too broad, yet \"mexican wrapped food\" is too specific. I want to go up the hiearchy or down..until the right LEVEL.","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":2585,"Q_Id":1695971,"Users Score":0,"Answer":"WordNet's hypernym tree ends with a single root synset for the word \"entity\". 
If you are using WordNet's C library, then you can get a while recursive structure for a synset's ancestors using traceptrs_ds, and you can get the whole synset tree by recursively following nextss and ptrlst pointers until you hit null pointers.","Q_Score":6,"Tags":"python,text,nlp,words,wordnet","A_Id":1717952,"CreationDate":"2009-11-08T10:29:00.000","Title":"Does WordNet have \"levels\"? (NLP)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example...\nChicken is an animal.\nBurrito is a food.\nWordNet allows you to do \"is-a\"...the hiearchy feature.\nHowever, how do I know when to stop travelling up the tree? I want a LEVEL.\nThat is consistent.\nFor example, if presented with a bunch of words, I want wordNet to categorize all of them, but at a certain level, so it doesn't go too far up. Categorizing \"burrito\" as a \"thing\" is too broad, yet \"mexican wrapped food\" is too specific. I want to go up the hiearchy or down..until the right LEVEL.","AnswerCount":5,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":2585,"Q_Id":1695971,"Users Score":6,"Answer":"[Please credit Pete Kirkham, he first came with the reference to SUMO which may well answer the question asked by Alex, the OP]\n(I'm just providing a complement of information here; I started in a comment field but soon ran out of space and layout capabilites...)\nAlex: Most of SUMO is science or engineering? It does not contain every-day words like foods, people, cars, jobs, etc?\nPete K: SUMO is an upper ontology. The mid-level ontologies (where you would find concepts between 'thing' and 'beef burrito') listed on the page don't include food, but reflect the sorts of organisations which fund the project. There is a mid-level ontology for people. There's also one for industries (and hence jobs), including food suppliers, but no mention of burritos if you grep it.\nMy two cents\n100% of WordNet (3.0 i.e. the latest, as well as older versions) is mapped to SUMO, and that may just be what Alex need. The mid-level ontologies associated with SUMO (or rather with MILO) are effectively in specific domains, and do not, at this time, include Foodstuff, but since WordNet does (include all -well, many of- these everyday things) you do not need to leverage any formal ontology \"under\" SUMO, but instead use Sumo's WordNet mapping (possibly in addition to WordNet, which, again, is not an ontology but with its informal and loose \"hierarchy\" may also help.\nSome difficulty may arise, however, from two area (and then some ;-) ?):\n\nthe SUMO ontology's \"level\" may not be the level you'd have in mind for your particular application. For example while \"Burrito\" brings \"Food\", at top level entity in SUMO \"Chicken\" brings well \"Chicken\" which only through a long chain finds \"Animal\" (specifically: Chicken->Poultry->Bird->Warm_Blooded_Vertebrae->Vertebrae->Animal).\nWordnet's coverage and metadata is impressive, but with regards to the mid-level concepts can be a bit inconsistent. 
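If the goal is to cut every word off at a fixed depth of the hypernym hierarchy, a hedged sketch with NLTK's WordNet interface may help (it assumes the WordNet corpus data is installed; the synset name and the chosen depth are only examples).

    from nltk.corpus import wordnet as wn

    def category_at_depth(synset, depth):
        # hypernym_paths() runs from the root ('entity') down to the synset itself;
        # take the ancestor 'depth' steps below the root, or the synset if the path is shorter
        path = synset.hypernym_paths()[0]
        return path[min(depth, len(path) - 1)]

    print(category_at_depth(wn.synset('burrito.n.01'), 4))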
For example \"our\" Burrito's hypernym is appropriately \"Dish\", which provides it with circa 140 food dishes, which includes generics such as \"Soup\" or \"Casserole\" as well as \"Chicken Marengo\" (but omitting say \"Chicken Cacciatore\")\n\nMy point, in bringing up these issues, is not to criticize WordNet or SUMO and its related ontologies, but rather to illustrate simply some of the challenges associated with building ontology, particularly at the mid-level.\nRegardless of some possible flaws and lackings of a solution based on SUMO and WordNet, a pragmatic use of these frameworks may well \"fit the bill\" (85% of the time)","Q_Score":6,"Tags":"python,text,nlp,words,wordnet","A_Id":1698380,"CreationDate":"2009-11-08T10:29:00.000","Title":"Does WordNet have \"levels\"? (NLP)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to write a code in python to solve a sudoku puzzle. Do you guys have any idea about a good algorithm for this purpose. I read somewhere in net about a algorithm which solves it by filling the whole box with all possible numbers, then inserts known values into the corresponding boxes.From the row and coloumn of known values the known value is removed.If you guys know any better algorithm than this please help me to write one. Also I am confused that how i should read the known values from the user. It is really hard to enter the values one by one through console. Any easy way for this other than using gui?","AnswerCount":11,"Available Count":2,"Score":0.0181798149,"is_accepted":false,"ViewCount":147751,"Q_Id":1697334,"Users Score":1,"Answer":"Not gonna write full code, but I did a sudoku solver a long time ago. I found that it didn't always solve it (the thing people do when they have a newspaper is incomplete!), but now think I know how to do it.\n\nSetup: for each square, have a set of flags for each number showing the allowed numbers.\nCrossing out: just like when people on the train are solving it on paper, you can iteratively cross out known numbers. Any square left with just one number will trigger another crossing out. This will either result in solving the whole puzzle, or it will run out of triggers. This is where I stalled last time.\nPermutations: there's only 9! = 362880 ways to arrange 9 numbers, easily precomputed on a modern system. All of the rows, columns, and 3x3 squares must be one of these permutations. Once you have a bunch of numbers in there, you can do what you did with the crossing out. For each row\/column\/3x3, you can cross out 1\/9 of the 9! permutations if you have one number, 1\/(8*9) if you have 2, and so forth.\nCross permutations: Now you have a bunch of rows and columns with sets of potential permutations. But there's another constraint: once you set a row, the columns and 3x3s are vastly reduced in what they might be. You can do a tree search from here to find a solution.","Q_Score":22,"Tags":"python,algorithm,sudoku","A_Id":35500598,"CreationDate":"2009-11-08T17:54:00.000","Title":"Algorithm for solving Sudoku","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to write a code in python to solve a sudoku puzzle. 
Do you guys have any idea about a good algorithm for this purpose. I read somewhere in net about a algorithm which solves it by filling the whole box with all possible numbers, then inserts known values into the corresponding boxes.From the row and coloumn of known values the known value is removed.If you guys know any better algorithm than this please help me to write one. Also I am confused that how i should read the known values from the user. It is really hard to enter the values one by one through console. Any easy way for this other than using gui?","AnswerCount":11,"Available Count":2,"Score":0.0906594778,"is_accepted":false,"ViewCount":147751,"Q_Id":1697334,"Users Score":5,"Answer":"I wrote a simple program that solved the easy ones. It took its input from a file which was just a matrix with spaces and numbers. The datastructure to solve it was just a 9 by 9 matrix of a bit mask. The bit mask would specify which numbers were still possible on a certain position. Filling in the numbers from the file would reduce the numbers in all rows\/columns next to each known location. When that is done you keep iterating over the matrix and reducing possible numbers. If each location has only one option left you're done. But there are some sudokus that need more work. For these ones you can just use brute force: try all remaining possible combinations until you find one that works.","Q_Score":22,"Tags":"python,algorithm,sudoku","A_Id":1697407,"CreationDate":"2009-11-08T17:54:00.000","Title":"Algorithm for solving Sudoku","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to learn programming in python and am also working against a deadline for setting up a neural network which looks like it's going to feature multidirectional associative memory and recurrent connections among other things. While the mathematics for all these things can be accessed from various texts and sources (and is accessible, so to speak), as a newbie to python (and programming as a profession) I am kinda floating in space looking for the firmament as I try to 'implement' things!! \nInformation on any good online tutorials on constructing neural networks ab initio will be greatly appreciated :) \nIn the meantime I am moonlighting as a MatLab user to nurse the wounds caused by Python :)","AnswerCount":3,"Available Count":1,"Score":0.2605204458,"is_accepted":false,"ViewCount":4234,"Q_Id":1698017,"Users Score":4,"Answer":"If you're familiar with Matlab, check out the excellent Python libraries numpy, scipy, and matplotlib. Together, they provide the most commonly used subset of Matlab functions.","Q_Score":3,"Tags":"python,scipy,neural-network","A_Id":1698110,"CreationDate":"2009-11-08T21:44:00.000","Title":"Neural Networks in Python without using any readymade libraries...i.e., from first principles..help!","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working with complex networks. I want to find group of nodes which forms a cycle of 3 nodes (or triangles) in a given graph. 
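Both sudoku answers above come down to the same core step: track the still-possible digits per cell and repeatedly cross out. A minimal sketch of that elimination pass, using Python sets instead of bit masks (the 9x9 list-of-lists input with 0 for blanks is an assumption, and contradiction detection plus the brute-force fallback are left out):

```python
# Candidate elimination as described above: whenever a cell has exactly one
# candidate, remove that digit from every peer in its row, column and box.
def propagate(grid):
    cand = [[set(range(1, 10)) if grid[r][c] == 0 else {grid[r][c]}
             for c in range(9)] for r in range(9)]
    changed = True
    while changed:
        changed = False
        for r in range(9):
            for c in range(9):
                if len(cand[r][c]) != 1:
                    continue
                v = next(iter(cand[r][c]))
                peers = [(r, k) for k in range(9)] + [(k, c) for k in range(9)]
                br, bc = 3 * (r // 3), 3 * (c // 3)
                peers += [(br + i, bc + j) for i in range(3) for j in range(3)]
                for pr, pc in peers:
                    if (pr, pc) != (r, c) and v in cand[pr][pc]:
                        cand[pr][pc].discard(v)
                        changed = True
    return cand  # cells with one candidate are solved; the rest need search
```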
As my graph contains about million edges, using a simple iterative solution (multiple \"for\" loop) is not very efficient.\nI am using python for my programming, if these is some inbuilt modules for handling these problems, please let me know.\nIf someone knows any algorithm which can be used for finding triangles in graphs, kindly reply back.","AnswerCount":11,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":18355,"Q_Id":1705824,"Users Score":0,"Answer":"Do you need to find 'all' of the 'triangles', or just 'some'\/'any'?\nOr perhaps you just need to test whether a particular node is part of a triangle?\nThe test is simple - given a node A, are there any two connected nodes B & C that are also directly connected.\nIf you need to find all of the triangles - specifically, all groups of 3 nodes in which each node is joined to the other two - then you need to check every possible group in a very long running 'for each' loop.\nThe only optimisation is ensuring that you don't check the same 'group' twice, e.g. if you have already tested that B & C aren't in a group with A, then don't check whether A & C are in a group with B.","Q_Score":10,"Tags":"python,graph,geometry,cycle","A_Id":1705913,"CreationDate":"2009-11-10T05:33:00.000","Title":"Finding cycle of 3 nodes ( or triangles) in a graph","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working with complex networks. I want to find group of nodes which forms a cycle of 3 nodes (or triangles) in a given graph. As my graph contains about million edges, using a simple iterative solution (multiple \"for\" loop) is not very efficient.\nI am using python for my programming, if these is some inbuilt modules for handling these problems, please let me know.\nIf someone knows any algorithm which can be used for finding triangles in graphs, kindly reply back.","AnswerCount":11,"Available Count":2,"Score":0.0181798149,"is_accepted":false,"ViewCount":18355,"Q_Id":1705824,"Users Score":1,"Answer":"Even though it isn't efficient, you may want to implement a solution, so use the loops. Write a test so you can get an idea as to how long it takes.\nThen, as you try new approaches you can do two things:\n1) Make certain that the answer remains the same.\n2) See what the improvement is.\nHaving a faster algorithm that misses something is probably going to be worse than having a slower one.\nOnce you have the slow test, you can see if you can do this in parallel and see what the performance increase is.\nThen, you can see if you can mark all nodes that have less than 3 vertices.\nIdeally, you may want to shrink it down to just 100 or so first, so you can draw it, and see what is happening graphically.\nSometimes your brain will see a pattern that isn't as obvious when looking at algorithms.","Q_Score":10,"Tags":"python,graph,geometry,cycle","A_Id":1705866,"CreationDate":"2009-11-10T05:33:00.000","Title":"Finding cycle of 3 nodes ( or triangles) in a graph","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to call a Python function from MATLAB. 
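The first triangle answer above boils down to: for an edge (a, b), any common neighbour c closes a triangle. A small sketch of that test with a plain dict-of-sets adjacency (if networkx is available, nx.triangles and nx.enumerate_all_cliques are ready-made alternatives):

```python
# Set-intersection triangle finder; assumes hashable, orderable node labels.
from collections import defaultdict

def triangles(edges):
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    found = set()
    for a in adj:
        for b in adj[a]:
            if b <= a:            # handle each edge once
                continue
            for c in adj[a] & adj[b]:
                if c > b:         # report each triangle once
                    found.add((a, b, c))
    return found

print(triangles([(1, 2), (2, 3), (1, 3), (3, 4)]))   # {(1, 2, 3)}
```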
how can I do this?","AnswerCount":13,"Available Count":1,"Score":0.0461211021,"is_accepted":false,"ViewCount":132733,"Q_Id":1707780,"Users Score":3,"Answer":"Like Daniel said you can run python commands directly from Matlab using the py. command. To run any of the libraries you just have to make sure Malab is running the python environment where you installed the libraries:\nOn a Mac:\n\nOpen a new terminal window;\ntype: which python (to find out where the default version of python is installed);\nRestart Matlab;\ntype: pyversion('\/anaconda2\/bin\/python'), in the command line (obviously replace with your path).\nYou can now run all the libraries in your default python installation. \n\nFor example:\n\n\npy.sys.version;\npy.sklearn.cluster.dbscan","Q_Score":76,"Tags":"python,matlab,language-interoperability","A_Id":50057490,"CreationDate":"2009-11-10T12:57:00.000","Title":"Call Python function from MATLAB","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i 'm just finish labs meeting with my advisor, previous code is written in matlab and it run offline mode not realtime mode, so i decide to convert to python+numpy (in offline version) but after labs meeting, my advisor raise issue about speed of realtime recognition, so i have doubt about speed of python+numpy to do this project. or better in c? my project is about using electronic glove (2x sensors) to get realtime data and do data processing, recognition process","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":914,"Q_Id":1727950,"Users Score":1,"Answer":"I think the answer depends on three things: how well you code in Matlab, how well you code in Python\/Numpy, and your algorithm. Both Matlab and Python can be fast for number crunching if you're diligent about vectorizing everything and using library calls.\nIf your Matlab code is already very good I would be surprised if you saw much performance benefit moving to Numpy unless there's some specific idiom you can use to your advantage. You might not even see a large benefit moving to C. I this case your effort would likely be better spent tuning your algorithm.\nIf your Matlab code isn't so good you could 1) write better Matlab code, 2) rewrite in good Numpy code, or 3) rewrite in C.","Q_Score":2,"Tags":"python,c,numpy,gesture-recognition","A_Id":1730684,"CreationDate":"2009-11-13T08:43:00.000","Title":"Just Curious about Python+Numpy to Realtime Gesture Recognition","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i am a engineering student and i have to do a lot of numerical processing, plots, simulations etc. The tool that i use currently is Matlab. I use it in my university computers for most of my assignments. However, i want to know what are the free options available. \ni have done some research and many have said that python is a worthy replacement for matlab in various scenarios. i want to know how to do all this with python. i am using a mac so how do i install the different python packages. what are those packages? is it really a viable alternative? 
what are the things i can and cannot do using this python setup?","AnswerCount":8,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":15393,"Q_Id":1776290,"Users Score":13,"Answer":"I've been programming with Matlab for about 15 years, and with Python for about 10. It usually breaks down this way:\nIf you can satisfy the following conditions:\n 1. You primarily use matrices and matrix operations\n 2. You have the money for a Matlab license\n 3. You work on a platform that mathworks supports\nThen, by all means, use Matlab. Otherwise, if you have data structures other than matrices, want an open-source option that allows you to deliver solutions without worrying about licenses, and need to build on platforms that mathworks does not support; then, go with Python.\nThe matlab language is clunky, but the user interface is slick. The Python language is very nice -- with iterators, generators, and functional programming tools that matlab lacks; however, you will have to pick and choose to put together a nice slick interface if you don't like (or can't use) SAGE.\nI hope that helps.","Q_Score":18,"Tags":"python,matlab","A_Id":1777708,"CreationDate":"2009-11-21T18:24:00.000","Title":"replacing Matlab with python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I do know there are some libraries that allow to use Support vector Machines from python code, but I am looking specifically for libraries that allow one to teach it online (this is, without having to give it all the data at once).\nAre there any?","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5533,"Q_Id":1783669,"Users Score":0,"Answer":"Why would you want to train it online? Adding trainings instances would usually require to re-solve the quadratic programming problem associated with the SVM.\nA way to handle this is to train a SVM in batch mode, and when new data is available, check if these data points are in the [-1, +1] margin of the hyperplane. If so, retrain the SVM using all the old support vectors, and the new training data that falls in the margin.\nOf course, the results can be slightly different compared to batch training on all your data, as some points can be discarded that would be support vectors later on. 
So again, why do you want to perform online training of you SVM?","Q_Score":10,"Tags":"python,artificial-intelligence,machine-learning,svm","A_Id":1816714,"CreationDate":"2009-11-23T15:03:00.000","Title":"Any python Support Vector Machine library around that allows online learning?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"how can I replace the NaN value in an array, zero if an operation is performed such that as a result instead of the NaN value is zero operations as\n0 \/ 0 = NaN can be replaced by 0","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32496,"Q_Id":1803516,"Users Score":0,"Answer":"import numpy\nalpha = numpy.array([1,2,3,numpy.nan,4])\nn = numpy.nan_to_num(alpha)\nprint(n)\noutput : array([1., 2., 3., 0., 4.])","Q_Score":10,"Tags":"python","A_Id":72299692,"CreationDate":"2009-11-26T12:54:00.000","Title":"replace the NaN value zero after an operation with arrays","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python code computing a matrix, and I would like to use this matrix (or array, or list) from C code.\nI wanted to pickle the matrix from the python code, and unpickle it from c code, but I could not find documentation or example on how to do this.\nI found something about marshalling data, but nothing about unpickling from C.\nEdit :\nCommenters Peter H asked if I was working with numpy arrays. The answer is yes.","AnswerCount":7,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1690,"Q_Id":1881851,"Users Score":0,"Answer":"take a look at module struct ?","Q_Score":1,"Tags":"python,c","A_Id":1881867,"CreationDate":"2009-12-10T15:41:00.000","Title":"How to unpickle from C code","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large collection of rectangles, all of the same size. I am generating random points that should not fall in these rectangles, so what I wish to do is test if the generated point lies in one of the rectangles, and if it does, generate a new point.\nUsing R-trees seem to work, but they are really meant for rectangles and not points. I could use a modified version of a R-tree algorithm which works with points too, but I'd rather not reinvent the wheel, if there is already some better solution. I'm not very familiar with data-structures, so maybe there already exists some structure that works for my problem?\nIn summary, basically what I'm asking is if anyone knows of a good algorithm, that works in Python, that can be used to check if a point lies in any rectangle in a given set of rectangles.\nedit: This is in 2D and the rectangles are not rotated.","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":13398,"Q_Id":1897779,"Users Score":0,"Answer":"Your R-tree approach is the best approach I know of (that's the approach I would choose over quadtrees, B+ trees, or BSP trees, as R-trees seem convenient to build in your case). 
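The SVM answer above (train in batch, then retrain on the old support vectors plus any new points that land inside the margin) could look roughly like the following with scikit-learn; the library choice and the linear kernel are assumptions on my part, since the answer names neither:

```python
# Hedged sketch of the batch-plus-margin-check scheme described above.
import numpy as np
from sklearn.svm import SVC

def update_svm(clf, X_old, y_old, X_new, y_new):
    """clf must already have been fit on (X_old, y_old)."""
    # keep the old support vectors plus new points inside the [-1, +1] margin
    in_margin = np.abs(clf.decision_function(X_new)) <= 1.0
    X_keep = np.vstack([X_old[clf.support_], X_new[in_margin]])
    y_keep = np.concatenate([y_old[clf.support_], y_new[in_margin]])
    return SVC(kernel="linear").fit(X_keep, y_keep)
```

As the answer notes, this can differ slightly from retraining on everything, because discarded points may have become support vectors later.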
Caveat: I'm no expert, even though I remember a few things from my senior year university class of algorithmic!","Q_Score":11,"Tags":"python,algorithm,point","A_Id":1897910,"CreationDate":"2009-12-13T21:17:00.000","Title":"Test if point is in some rectangle","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large collection of rectangles, all of the same size. I am generating random points that should not fall in these rectangles, so what I wish to do is test if the generated point lies in one of the rectangles, and if it does, generate a new point.\nUsing R-trees seem to work, but they are really meant for rectangles and not points. I could use a modified version of a R-tree algorithm which works with points too, but I'd rather not reinvent the wheel, if there is already some better solution. I'm not very familiar with data-structures, so maybe there already exists some structure that works for my problem?\nIn summary, basically what I'm asking is if anyone knows of a good algorithm, that works in Python, that can be used to check if a point lies in any rectangle in a given set of rectangles.\nedit: This is in 2D and the rectangles are not rotated.","AnswerCount":5,"Available Count":2,"Score":0.1194272985,"is_accepted":false,"ViewCount":13398,"Q_Id":1897779,"Users Score":3,"Answer":"For rectangles that are aligned with the axes, you only need two points (four numbers) to identify the rectangle - conventionally, bottom-left and top-right corners. To establish whether a given point (Xtest, Ytest) overlaps with a rectangle (XBL, YBL, XTR, YTR) by testing both:\n\nXtest >= XBL && Xtest <= XTR\nYtest >= YBL && Ytest <= YTR\n\nClearly, for a large enough set of points to test, this could be fairly time consuming. The question, then, is how to optimize the testing.\nClearly, one optimization is to establish the minimum and maximum X and Y values for the box surrounding all the rectangles (the bounding box): a swift test on this shows whether there is any need to look further.\n\nXtest >= Xmin && Xtest <= Xmax\nYtest >= Ymin && Ytest <= Ymax\n\nDepending on how much of the total surface area is covered with rectangles, you might be able to find non-overlapping sub-areas that contain rectangles, and you could then avoid searching those sub-areas that cannot contain a rectangle overlapping the point, again saving comparisons during the search at the cost of pre-computation of suitable data structures. If the set of rectangles is sparse enough, there may be no overlapping, in which case this degenerates into the brute-force search. Equally, if the set of rectangles is so dense that there are no sub-ranges in the bounding box that can be split up without breaking rectangles.\nHowever, you could also arbitrarily break up the bounding area into, say, quarters (half in each direction). You would then use a list of boxes which would include more boxes than in the original set (two or four boxes for each box that overlapped one of the arbitrary boundaries). The advantage of this is that you could then eliminate three of the four quarters from the search, reducing the amount of searching to be done in total - at the expense of auxilliary storage.\nSo, there are space-time trade-offs, as ever. And pre-computation versus search trade-offs. 
If you are unlucky, the pre-computation achieves nothing (for example, there are two boxes only, and they don't overlap on either axis). On the other hand, it could achieve considerable search-time benefit.","Q_Score":11,"Tags":"python,algorithm,point","A_Id":1897962,"CreationDate":"2009-12-13T21:17:00.000","Title":"Test if point is in some rectangle","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hello I have a 1000 data series with 1500 points in each. \nThey form a (1000x1500) size Numpy array created using np.zeros((1500, 1000)) and then filled with the data. \nNow what if I want the array to grow to say 1600 x 1100? Do I have to add arrays using hstack and vstack or is there a better way?\nI would want the data already in the 1000x1500 piece of the array not to be changed, only blank data (zeros) added to the bottom and right, basically.\nThanks.","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":17603,"Q_Id":1909994,"Users Score":2,"Answer":"No matter what, you'll be stuck reallocating a chunk of memory, so it doesn't really matter if you use arr.resize(), np.concatenate, hstack\/vstack, etc. Note that if you're accumulating a lot of data sequentially, Python lists are usually more efficient.","Q_Score":9,"Tags":"python,arrays,numpy,reshape","A_Id":1916520,"CreationDate":"2009-12-15T20:02:00.000","Title":"How do I add rows and columns to a NUMPY array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hello I have a 1000 data series with 1500 points in each. \nThey form a (1000x1500) size Numpy array created using np.zeros((1500, 1000)) and then filled with the data. \nNow what if I want the array to grow to say 1600 x 1100? Do I have to add arrays using hstack and vstack or is there a better way?\nI would want the data already in the 1000x1500 piece of the array not to be changed, only blank data (zeros) added to the bottom and right, basically.\nThanks.","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":17603,"Q_Id":1909994,"Users Score":3,"Answer":"If you want zeroes in the added elements, my_array.resize((1600, 1000)) should work. 
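The per-rectangle comparison and the bounding-box rejection test described in the answer above fit in a few lines; the (xbl, ybl, xtr, ytr) tuple layout is an assumption:

```python
# Axis-aligned point-in-rectangle test with an optional bounding-box pre-test.
def point_in_any(px, py, rects, bbox=None):
    if bbox is not None:                     # cheap rejection first
        xmin, ymin, xmax, ymax = bbox
        if not (xmin <= px <= xmax and ymin <= py <= ymax):
            return False
    return any(xbl <= px <= xtr and ybl <= py <= ytr
               for xbl, ybl, xtr, ytr in rects)

rects = [(0, 0, 2, 1), (5, 5, 6, 7)]
bbox = (0, 0, 6, 7)
print(point_in_any(1, 0.5, rects, bbox))   # True
print(point_in_any(3, 3, rects, bbox))     # False
```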
Note that this differs from numpy.resize(my_array, (1600, 1000)), in which previous lines are duplicated, which is probably not what you want.\nOtherwise (for instance if you want to avoid initializing elements to zero, which could be unnecessary), you can indeed use hstack and vstack to add an array containing the new elements; numpy.concatenate() (see pydoc numpy.concatenate) should work too (it is just more general, as far as I understand).\nIn either case, I would guess that a new memory block has to be allocated in order to extend the array, and that all these methods take about the same time.","Q_Score":9,"Tags":"python,arrays,numpy,reshape","A_Id":1910401,"CreationDate":"2009-12-15T20:02:00.000","Title":"How do I add rows and columns to a NUMPY array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know python offers random module to do some simple lottery. Let say random.shuffle() is a good one.\nHowever, I want to build my own simple one. What should I look into? Is there any specific mathematical philosophies behind lottery?\nLet say, the simplest situation. 100 names and generate 20 names randomly.\nI don't want to use shuffle, since I want to learn to build one myself.\nI need some advise to start. Thanks.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1639,"Q_Id":1950539,"Users Score":0,"Answer":"The main shortcoming of software-based methods of generating lottery numbers is the fact that all random numbers generated by software are pseudo-random.\nThis may not be a problem for your simple application, but you did ask about a 'specific mathematical philosophy'. You will have noticed that all commercial lottery systems use physical methods: balls with numbers.\nAnd behind the scenes, the numbers generated by physical lottery systems will be carefully scrutunised for indications of non-randomness and steps taken to eliminate it.\nAs I say, this may not be a consideration for your simple application, but the overriding requirement of a true lottery (the 'specific mathematical philosophy') should be mathematically demonstrable randomness","Q_Score":0,"Tags":"python","A_Id":1950575,"CreationDate":"2009-12-23T03:30:00.000","Title":"python lottery suggestion","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 3D space (x, y, z) with an additional parameter at each point (energy), giving 4 dimensions of data in total.\nI would like to find a set of x, y, z points which correspond to an iso-energy surface found by interpolating between the known points.\nThe spacial mesh has constant spacing and surrounds the iso-energy surface entirely, however, it does not occupy a cubic space (the mesh occupies a roughly cylindrical space)\nSpeed is not crucial, I can leave this number crunching for a while. Although I'm coding in Python and NumPy, I can write portions of the code in FORTRAN. 
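One caveat on the accepted answer: my_array.resize((1600, 1000)) keeps the existing data in place only because the column count is unchanged; with C-order flattening, growing both dimensions that way would shift rows around. A hedged sketch of the grow-both-ways case using np.pad, which the answers do not mention, keeping the old block in the top-left corner:

```python
# Grow a (1500, 1000) array to (1600, 1100), zero-filling the new rows/columns.
import numpy as np

arr = np.zeros((1500, 1000))
arr[0, 0] = 42.0                               # stand-in for existing data

grown = np.pad(arr, ((0, 100), (0, 100)), mode="constant", constant_values=0)
print(grown.shape)                             # (1600, 1100)
print(grown[0, 0], grown[1599, 1099])          # 42.0 0.0
```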
I can also wrap existing C\/C++\/FORTRAN libraries for use in the scripts, if such libraries exist.\nAll examples and algorithms that I have so far found online (and in Numerical Recipes) stop short of 4D data.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":5443,"Q_Id":1972172,"Users Score":1,"Answer":"Why not try quadlinear interpolation?\nextend Trilinear interpolation by another dimension. As long as a linear interpolation model fits your data, it should work.","Q_Score":8,"Tags":"python,algorithm,interpolation","A_Id":1972198,"CreationDate":"2009-12-28T23:46:00.000","Title":"Interpolating a scalar field in a 3D space","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 3D space (x, y, z) with an additional parameter at each point (energy), giving 4 dimensions of data in total.\nI would like to find a set of x, y, z points which correspond to an iso-energy surface found by interpolating between the known points.\nThe spacial mesh has constant spacing and surrounds the iso-energy surface entirely, however, it does not occupy a cubic space (the mesh occupies a roughly cylindrical space)\nSpeed is not crucial, I can leave this number crunching for a while. Although I'm coding in Python and NumPy, I can write portions of the code in FORTRAN. I can also wrap existing C\/C++\/FORTRAN libraries for use in the scripts, if such libraries exist.\nAll examples and algorithms that I have so far found online (and in Numerical Recipes) stop short of 4D data.","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":5443,"Q_Id":1972172,"Users Score":2,"Answer":"Since you have a spatial mesh with constant spacing, you can identify all neighbors on opposite sides of the isosurface. Choose some form of interpolation (q.v. Reed Copsey's answer) and do root-finding along the line between each such neighbor.","Q_Score":8,"Tags":"python,algorithm,interpolation","A_Id":1973347,"CreationDate":"2009-12-28T23:46:00.000","Title":"Interpolating a scalar field in a 3D space","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed Matplotlib, and I have created two lists, x and y.\nI want the x-axis to have values from 0 to 100 in steps of 10 and the y-axis to have values from 0 to 1 in steps of 0.1. How do I plot this graph?","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":5478,"Q_Id":2048041,"Users Score":2,"Answer":"There is a very good book:\nSandro Tosi, Matplotlib for Python Developers, Packt Pub., 2009.","Q_Score":6,"Tags":"python,matplotlib","A_Id":10303500,"CreationDate":"2010-01-12T10:02:00.000","Title":"How do I plot a graph in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Background\nI have many (thousands!) of data files with a standard field based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available \/ searchable. 
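The second interpolation answer above (root-find along each grid edge whose endpoint energies straddle the iso value) has a closed form when the interpolation is linear; a tiny sketch of that edge crossing:

```python
# Linear interpolation of the iso-energy point on the edge p0 -> p1,
# assuming the energies e0 and e1 straddle `iso`.
def edge_crossing(p0, p1, e0, e1, iso):
    t = (iso - e0) / (e1 - e0)
    return [a + t * (b - a) for a, b in zip(p0, p1)]

print(edge_crossing((0, 0, 0), (1, 0, 0), 2.0, 6.0, 3.0))  # [0.25, 0.0, 0.0]
```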
(Some options include RDBMS, NoSQL stuff, using the grep\/awk and friends, etc.). \nProposal\nIn particular, one idea that appeals to me is \"indexing\" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded me any pre-rolled solutions for this. \nI realize this is a little ill-formed, and solutions are welcome.\nAdditional Details\n\nfiles long, not wide\n\n\nmillions of lines per hour, spread over 100 files per hour\ntab seperated, not many columns (~10) \nfields are short (say < 50 chars per field)\n\nqueries are on fields, combinations of fields, and can be historical\n\nDrawbacks to various solutions:\n(All of these are based on my observations and tests, but I'm open to correction)\nBDB\n\nhas problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible)\nsingle writer (if it's possible to get around this, I want to see code!)\nhard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over). \nsince it only stores strings, there is a serialize \/ deserialize step\n\nRDBMSes\nWins:\n\nflat table model is excellent for querying, indexing\n\nLosses:\n\nIn my experience, the problem comes with indexing. From what I've seen (and please correct me if I am wrong), the issue with rdbmses I know (sqlite, postgres) supporting either batch load (then indexing is slow at the end), or row by row loading (which is low). Maybe I need more performance tuning.","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":2801,"Q_Id":2110843,"Users Score":1,"Answer":"If the data is already organized in fields, it doesn't sound like a text searching\/indexing problem. It sounds like tabular data that would be well-served by a database.\nScript the file data into a database, index as you see fit, and query the data in any complex way the database supports.\nThat is unless you're looking for a cool learning project. Then, by all means, come up with an interesting file indexing scheme.","Q_Score":4,"Tags":"python,algorithm,indexing,binary-tree","A_Id":2111067,"CreationDate":"2010-01-21T16:22:00.000","Title":"File indexing (using Binary trees?) in Python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Background\nI have many (thousands!) of data files with a standard field based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available \/ searchable. (Some options include RDBMS, NoSQL stuff, using the grep\/awk and friends, etc.). \nProposal\nIn particular, one idea that appeals to me is \"indexing\" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded me any pre-rolled solutions for this. 
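As one concrete version of "script the file data into a database and index as you see fit", here is a hedged sketch with sqlite3 (also suggested by a later answer). The three-column schema, file names, and query value are made up for illustration; the real files have ~10 fields:

```python
# Load tab-delimited rows into sqlite3, index a queried field, then query it.
import csv
import sqlite3

conn = sqlite3.connect("fields.db")
conn.execute("CREATE TABLE IF NOT EXISTS rows (ts TEXT, host TEXT, value TEXT)")
with open("data.tsv", newline="") as fh:
    reader = csv.reader(fh, delimiter="\t")
    conn.executemany("INSERT INTO rows VALUES (?, ?, ?)", reader)
conn.execute("CREATE INDEX IF NOT EXISTS idx_host ON rows (host)")
conn.commit()

for row in conn.execute("SELECT * FROM rows WHERE host = ?", ("web01",)):
    print(row)
```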
\nI realize this is a little ill-formed, and solutions are welcome.\nAdditional Details\n\nfiles long, not wide\n\n\nmillions of lines per hour, spread over 100 files per hour\ntab seperated, not many columns (~10) \nfields are short (say < 50 chars per field)\n\nqueries are on fields, combinations of fields, and can be historical\n\nDrawbacks to various solutions:\n(All of these are based on my observations and tests, but I'm open to correction)\nBDB\n\nhas problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible)\nsingle writer (if it's possible to get around this, I want to see code!)\nhard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over). \nsince it only stores strings, there is a serialize \/ deserialize step\n\nRDBMSes\nWins:\n\nflat table model is excellent for querying, indexing\n\nLosses:\n\nIn my experience, the problem comes with indexing. From what I've seen (and please correct me if I am wrong), the issue with rdbmses I know (sqlite, postgres) supporting either batch load (then indexing is slow at the end), or row by row loading (which is low). Maybe I need more performance tuning.","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":2801,"Q_Id":2110843,"Users Score":1,"Answer":"The physical storage access time will tend to dominate anything you do. When you profile, you'll find that the read() is where you spend most of your time.\nTo reduce the time spent waiting for I\/O, your best bet is compression.\nCreate a huge ZIP archive of all of your files. One open, fewer reads. You'll spend more CPU time. I\/O time, however, will dominate your processing, so reduce I\/O time by zipping everything.","Q_Score":4,"Tags":"python,algorithm,indexing,binary-tree","A_Id":2110912,"CreationDate":"2010-01-21T16:22:00.000","Title":"File indexing (using Binary trees?) in Python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Background\nI have many (thousands!) of data files with a standard field based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available \/ searchable. (Some options include RDBMS, NoSQL stuff, using the grep\/awk and friends, etc.). \nProposal\nIn particular, one idea that appeals to me is \"indexing\" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded me any pre-rolled solutions for this. 
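A minimal sketch of the "one big archive, fewer opens" idea from the answer above, using the standard zipfile module; the archive and member names are made up:

```python
# Stream tab-delimited members out of a single ZIP archive.
import csv
import io
import zipfile

with zipfile.ZipFile("all_data.zip") as zf:
    for name in zf.namelist():
        with zf.open(name) as raw:
            reader = csv.reader(io.TextIOWrapper(raw, encoding="utf-8"),
                                delimiter="\t")
            for row in reader:
                pass  # scan / index the fields here
```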
\nI realize this is a little ill-formed, and solutions are welcome.\nAdditional Details\n\nfiles long, not wide\n\n\nmillions of lines per hour, spread over 100 files per hour\ntab seperated, not many columns (~10) \nfields are short (say < 50 chars per field)\n\nqueries are on fields, combinations of fields, and can be historical\n\nDrawbacks to various solutions:\n(All of these are based on my observations and tests, but I'm open to correction)\nBDB\n\nhas problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible)\nsingle writer (if it's possible to get around this, I want to see code!)\nhard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over). \nsince it only stores strings, there is a serialize \/ deserialize step\n\nRDBMSes\nWins:\n\nflat table model is excellent for querying, indexing\n\nLosses:\n\nIn my experience, the problem comes with indexing. From what I've seen (and please correct me if I am wrong), the issue with rdbmses I know (sqlite, postgres) supporting either batch load (then indexing is slow at the end), or row by row loading (which is low). Maybe I need more performance tuning.","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":2801,"Q_Id":2110843,"Users Score":1,"Answer":"sqlite3 is fast, small, part of python (so nothing to install) and provides indexing of columns. It writes to files, so you wouldn't need to install a database system.","Q_Score":4,"Tags":"python,algorithm,indexing,binary-tree","A_Id":12805622,"CreationDate":"2010-01-21T16:22:00.000","Title":"File indexing (using Binary trees?) in Python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have an array of 27 elements,and i don't want to generate all permutations of array (27!)\ni need 5000 randomly choosed permutations,any tip will be useful...","AnswerCount":6,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":38205,"Q_Id":2124347,"Users Score":1,"Answer":"You may want the itertools.permutations() function. Gotta love that itertools module!\nNOTE: New in 2.6","Q_Score":30,"Tags":"python,permutation","A_Id":2124356,"CreationDate":"2010-01-23T19:12:00.000","Title":"how to generate permutations of array in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had to build a concept analyzer for computer science field and I used for this machine learning, the orange library for Python. I have the examples of concepts, where the features are lemma and part of speech, like algorithm|NN|concept. The problem is that any other word, that in fact is not a concept, is classified as a concept, due to the lack of negative examples. It is not feasable to put all the other words in learning file, classified as simple words not concepts(this will work, but is not quite a solution). 
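Since the permutation question asks for 5000 randomly chosen permutations of 27 elements, full enumeration with itertools.permutations (27! orderings) is impractical; a hedged alternative sketch using random.shuffle:

```python
# Draw k independent random permutations; each shuffle is uniform over 27!.
import random

def random_permutations(items, k):
    out = []
    for _ in range(k):
        perm = list(items)
        random.shuffle(perm)
        out.append(perm)
    return out

perms = random_permutations(range(27), 5000)
print(len(perms), perms[0][:5])
```

Duplicates are possible in principle, but with 27! possible orderings they are vanishingly unlikely.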
Any idea?\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":296,"Q_Id":2126383,"Users Score":2,"Answer":"The question is very unclear, but assuming what you mean is that your machine learning algorithm is not working without negative examples and you can't give it every possible negative example, then it's perfectly alright to give it some negative examples.\nThe point of data mining (a.k.a. machine learning) is to try coming up with general rules based on a relatively small samples of data and then applying them to larger data. In real life problems you will never have all the data. If you had all possible inputs, you could easily create a simple sequence of if-then rules which would always be correct. If it was that simple, robots would be doing all our thinking for us by now.","Q_Score":2,"Tags":"python,artificial-intelligence,machine-learning,data-mining","A_Id":2126656,"CreationDate":"2010-01-24T08:07:00.000","Title":"Machine learning issue for negative instances","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a camera that will be stationary, pointed at an indoors area. People will walk past the camera, within about 5 meters of it. Using OpenCV, I want to detect individuals walking past - my ideal return is an array of detected individuals, with bounding rectangles.\nI've looked at several of the built-in samples:\n\nNone of the Python samples really apply\nThe C blob tracking sample looks promising, but doesn't accept live video, which makes testing difficult. It's also the most complicated of the samples, making extracting the relevant knowledge and converting it to the Python API problematic.\nThe C 'motempl' sample also looks promising, in that it calculates a silhouette from subsequent video frames. Presumably I could then use that to find strongly connected components and extract individual blobs and their bounding boxes - but I'm still left trying to figure out a way to identify blobs found in subsequent frames as the same blob.\n\nIs anyone able to provide guidance or samples for doing this - preferably in Python?","AnswerCount":4,"Available Count":2,"Score":0.2449186624,"is_accepted":false,"ViewCount":45934,"Q_Id":2188646,"Users Score":5,"Answer":"Nick,\nWhat you are looking for is not people detection, but motion detection. If you tell us a lot more about what you are trying to solve\/do, we can answer better. \nAnyway, there are many ways to do motion detection depending on what you are going to do with the results. Simplest one would be differencing followed by thresholding while a complex one could be proper background modeling -> foreground subtraction -> morphological ops -> connected component analysis, followed by blob analysis if required. Download the opencv code and look in samples directory. You might see what you are looking for. 
Also, there is an Oreilly book on OCV.\nHope this helps,\nNand","Q_Score":37,"Tags":"python,opencv,computer-vision,motion-detection","A_Id":2352019,"CreationDate":"2010-02-02T23:50:00.000","Title":"How can I detect and track people using OpenCV?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a camera that will be stationary, pointed at an indoors area. People will walk past the camera, within about 5 meters of it. Using OpenCV, I want to detect individuals walking past - my ideal return is an array of detected individuals, with bounding rectangles.\nI've looked at several of the built-in samples:\n\nNone of the Python samples really apply\nThe C blob tracking sample looks promising, but doesn't accept live video, which makes testing difficult. It's also the most complicated of the samples, making extracting the relevant knowledge and converting it to the Python API problematic.\nThe C 'motempl' sample also looks promising, in that it calculates a silhouette from subsequent video frames. Presumably I could then use that to find strongly connected components and extract individual blobs and their bounding boxes - but I'm still left trying to figure out a way to identify blobs found in subsequent frames as the same blob.\n\nIs anyone able to provide guidance or samples for doing this - preferably in Python?","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":45934,"Q_Id":2188646,"Users Score":2,"Answer":"This is similar to a project we did as part of a Computer Vision course, and I can tell you right now that it is a hard problem to get right.\nYou could use foreground\/background segmentation, find all blobs and then decide that they are a person. The problem is that it will not work very well since people tend to go together, go past each other and so on, so a blob might very well consist of two persons and then you will see that blob splitting and merging as they walk along.\nYou will need some method of discriminating between multiple persons in one blob. This is not a problem I expect anyone being able to answer in a single SO-post.\nMy advice is to dive into the available research and see if you can find anything there. The problem is not unsolvavble considering that there exists products which do this: Autoliv has a product to detect pedestrians using an IR-camera on a car, and I have seen other products which deal with counting customers entering and exiting stores.","Q_Score":37,"Tags":"python,opencv,computer-vision,motion-detection","A_Id":2190799,"CreationDate":"2010-02-02T23:50:00.000","Title":"How can I detect and track people using OpenCV?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I cannot get the example Python programs to run. When executing the Python command \"from opencv import cv\" I get the message \"ImportError: No module named _cv\". There is a stale _cv.pyd in the site-packages directory, but no _cv.py anywhere. See step 5 below.\nMS Windows XP, VC++ 2008, Python 2.6, OpenCV 2.0\nHere's what I have done.\n\nDownloaded and ran the MS Windows installer for OpenCV2.0.\nDownloaded and installed CMake\nDownloaded and installed SWIG\nRan CMake. 
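A rough sketch of the background-modelling pipeline the first answer above describes (background subtraction, morphological clean-up, connected components / blobs with bounding boxes), written against the modern cv2 API, which is an assumption since the original discussion predates it:

```python
# Motion-blob detection from a live camera (OpenCV 4.x return signatures).
import cv2

cap = cv2.VideoCapture(0)
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg.apply(frame)                                 # foreground mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)    # remove speckle noise
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                     # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("blobs", frame)
    if cv2.waitKey(1) & 0xFF == 27:                      # Esc quits
        break
```

As the second answer warns, this gives moving blobs, not individual people; overlapping walkers will merge and split.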
After unchecking \"ENABLE_OPENMP\" in the CMake GUI, I was able to build OpenCV using INSTALL.vcproj and BUILD_ALL.vcproj. I do not know what the difference is, so I built everything under both of those project files. The C example programs run fine.\nCopied contents of OpenCV2.0\/Python2.6\/lib\/site-packages to my installed Python2.6\/lib\/site-packages directory. I notice that it contains an old _cv.pyd and an old libcv.dll.a.","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":16728,"Q_Id":2195441,"Users Score":3,"Answer":"After Step 1 (Installer) just copy the content of C:\\OpenCV2.0\\Python2.6\\Lib\\site-packages to C:\\Python26\\Lib\\site-packages (standard installation path assumed).\nThat's all.\nIf you have a webcam installed you can try the camshift.demo in C:\\OpenCV2.0\\samples\\python\nThe deprecated stuff (C:\\OpenCV2.0\\samples\\swig_python) does not work at the moment as somebody wrote above. The OpenCV People are working on it. Here is the full picture:\n\n31\/03\/10 (hopefully) Next OpenCV Official Release: 2.1.0 is due March\n31st, 2010.\nlink:\/\/opencv.willowgarage.com\/wiki\/Welcome\/Introduction#Announcements\n04\/03\/10 [james]rewriting samples for new Python 5:36 PM Mar 4th\nvia API link:\/\/twitter.com\/opencvlibrary\n12\/31\/09 We've gotten more serious about OpenCV's software\nengineering. We now have a full C++ and Python interface.\nlink:\/\/opencv.willowgarage.com\/wiki\/OpenCV%20Monthly\n9\/30\/09 Several (actually, most) SWIG-based Python samples do not work\ncorrectly now. The reason is this problem is being investigated and\nthe intermediate update of the OpenCV Python package will be released\nas soon as the problem is sorted out.\nlink:\/\/opencv.willowgarage.com\/wiki\/OpenCV%20Monthly","Q_Score":6,"Tags":"python,opencv","A_Id":2405587,"CreationDate":"2010-02-03T21:07:00.000","Title":"OpenCV 2.0 and Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running some calculations in an external machine and at the end I get X, Y pairs. I want to apply linear regression and obtain A, B, and R2. In this machine I can not install anything (it runs Linux) and has basic stuff installed on it, python, bash (of course), etc.\nI wonder what would be the best approach to use a script (python, bash, etc) or program (I can compile C and C++) that gives me the linear regression coefficients without the need to add external libraries (numpy, etc)","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":2428,"Q_Id":2204087,"Users Score":1,"Answer":"How about extracting the coeffs into a file, import to another machine and then use Excel\/Matlab\/whatever other program that does this for you?","Q_Score":2,"Tags":"c++,python,bash,linear-algebra","A_Id":2204122,"CreationDate":"2010-02-04T23:45:00.000","Title":"Minimal linear regression program","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am running some calculations in an external machine and at the end I get X, Y pairs. I want to apply linear regression and obtain A, B, and R2. 
In this machine I can not install anything (it runs Linux) and has basic stuff installed on it, python, bash (of course), etc.\nI wonder what would be the best approach to use a script (python, bash, etc) or program (I can compile C and C++) that gives me the linear regression coefficients without the need to add external libraries (numpy, etc)","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":2428,"Q_Id":2204087,"Users Score":3,"Answer":"For a single, simple, known function (as in your case: a line) it is not hard to simply code a basic least square routine from scratch (but does require some attention to detail). It is a very common assignment in introductory numeric analysis classes.\nSo, look up least squares on wikipedia or mathworld or in a text book and go to town.","Q_Score":2,"Tags":"c++,python,bash,linear-algebra","A_Id":2204124,"CreationDate":"2010-02-04T23:45:00.000","Title":"Minimal linear regression program","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there any way to turn off antialias for all text in a plot, especially the ticklabels?","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2659,"Q_Id":2219503,"Users Score":1,"Answer":"It seems this is not possible. Some classes such as Line2D have a \"set_antialiased\" method, but Text lacks this. I suggest you file a feature request on the Sourceforge tracker, and send an email to the matplotlib mailing list mentioning the request.","Q_Score":5,"Tags":"python,matplotlib,antialiasing","A_Id":2223165,"CreationDate":"2010-02-08T04:01:00.000","Title":"Matplotlib turn off antialias for text in plot?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing an ongoing survey, every quarter. We get people to sign up (where they give extensive demographic info). \nThen we get them to answer six short questions with 5 possible values much worse, worse, same, better, much better. \nOf course over time we will not get the same participants,, some will drop out and some new ones will sign up,, so I'm trying to decide how to best build a db and code (hope to use Python, Numpy?) to best allow for ongoing collection and analysis by the various categories defined by the initial demographic data..As of now we have 700 or so participants, so the dataset is not too big.\nI.E.;\ndemographic, UID, North, south, residential. commercial Then answer for 6 questions for Q1\nSame for Q2 and so on,, then need able to slice dice and average the values for the quarterly answers by the various demographics to see trends over time. \nThe averaging, grouping and so forth is modestly complicated by having differing participants each quarter\nAny pointers to design patterns for this sort of DB? and analysis? Is this a sparse matrix?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1157,"Q_Id":2223576,"Users Score":0,"Answer":"On the analysis, if your six questions have been posed in a way that would lead you to believe the answers will be correlated, consider conducting a factor analysis on the raw scores first. 
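Along the lines of the answer above, a from-scratch least-squares sketch using only the standard library, returning slope A, intercept B, and R²:

```python
# Ordinary least squares for y = A*x + B, plus the coefficient of determination.
def linreg(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx                      # slope A
    b = my - a * mx                    # intercept B
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

print(linreg([0, 1, 2, 3], [1, 3, 5, 7.1]))   # roughly (2.03, 0.98, 0.9999)
```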
Often comparing the factors across regions or customer type has more statistical power than comparing across questions alone. Also, the factor scores are more likely to be normally distributed (they are the weighted sum of 6 observations) while the six questions alone would not. This allows you to apply t-tests based on the normal distibution when comparing factor scores. \nOne watchout, though. If you assign numeric values to answers - 1 = much worse, 2 = worse, etc. you are implying that the distance between much worse and worse is the same as the distance between worse and same. This is generally not true - you might really have to screw up to get a vote of \"much worse\" while just being a passive screw up might get you a \"worse\" score. So the assignment of cardinal (numerics) to ordinal (ordering) has a bias of its own. \nThe unequal number of participants per quarter isn't a problem - there are statistical t-tests that deal with unequal sample sizes.","Q_Score":1,"Tags":"python,design-patterns,statistics,matrix,survey","A_Id":2224074,"CreationDate":"2010-02-08T17:38:00.000","Title":"Design pattern for ongoing survey anayisis","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a project where I need to store a matrix of numbers indexed by two string keys. The matrix is not jagged, i.e. if a column key exists for any row then it should exist for all rows. Similarly, if a row key exists for any column then it should exist for all columns.\nThe obvious way to express this is with an associative array of associative arrays, but this is both awkward and inefficient, and it doesn't enforce the non-jaggedness property. Do any popular programming languages provide an associative matrix either built into the language or as part of their standard libraries? If so, how do they work, both at the API and implementation level? I'm using Python and D for this project, but examples in other languages would still be useful because I would be able to look at the API and figure out the best way to implement something similar in Python or D.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":566,"Q_Id":2247197,"Users Score":2,"Answer":"Why not just use a standard matrix, but then have two dictionaries - one that converts the row keys to row indices and one that converts the columns keys to columns indices. You could make your own structure that would work this way fairly easily I think. You just make a class that contains the matrix and the two dictionaries and go from there.","Q_Score":6,"Tags":"python,data-structures,matrix,d,associative-array","A_Id":2247284,"CreationDate":"2010-02-11T19:39:00.000","Title":"Associative Matrices?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm reading a 6 million entry .csv file with Python, and I want to be able to search through this file for a particular entry.\nAre there any tricks to search the entire file? Should you read the whole thing into a dictionary or should you perform a search every time? 
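A bare-bones sketch of the class the accepted associative-matrix answer describes: a dense numpy matrix addressed through two key-to-index dictionaries (numpy is used here for the backing store; a list of lists would work the same way). Non-jaggedness is enforced by construction, since every row/column pair exists:

```python
# String-keyed matrix backed by a dense array and two key->index maps.
import numpy as np

class AssocMatrix:
    def __init__(self, row_keys, col_keys):
        self._rows = {k: i for i, k in enumerate(row_keys)}
        self._cols = {k: i for i, k in enumerate(col_keys)}
        self._data = np.zeros((len(self._rows), len(self._cols)))

    def __getitem__(self, key):
        r, c = key
        return self._data[self._rows[r], self._cols[c]]

    def __setitem__(self, key, value):
        r, c = key
        self._data[self._rows[r], self._cols[c]] = value

m = AssocMatrix(["alice", "bob"], ["math", "physics"])
m["alice", "physics"] = 3.5
print(m["alice", "physics"])   # 3.5
```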
I tried loading it into a dictionary but that took ages so I'm currently searching through the whole file every time which seems wasteful.\nCould I possibly utilize that the list is alphabetically ordered? (e.g. if the search word starts with \"b\" I only search from the line that includes the first word beginning with \"b\" to the line that includes the last word beginning with \"b\")\nI'm using import csv.\n(a side question: it is possible to make csv go to a specific line in the file? I want to make the program start at a random line)\nEdit: I already have a copy of the list as an .sql file as well, how could I implement that into Python?","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":20439,"Q_Id":2299454,"Users Score":0,"Answer":"my idea is to use python zodb module to store dictionaty type data and then create new csv file using that data structure. do all your operation at that time.","Q_Score":4,"Tags":"python,dictionary,csv,large-files","A_Id":3279041,"CreationDate":"2010-02-19T20:51:00.000","Title":"How do quickly search through a .csv file in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm reading a 6 million entry .csv file with Python, and I want to be able to search through this file for a particular entry.\nAre there any tricks to search the entire file? Should you read the whole thing into a dictionary or should you perform a search every time? I tried loading it into a dictionary but that took ages so I'm currently searching through the whole file every time which seems wasteful.\nCould I possibly utilize that the list is alphabetically ordered? (e.g. if the search word starts with \"b\" I only search from the line that includes the first word beginning with \"b\" to the line that includes the last word beginning with \"b\")\nI'm using import csv.\n(a side question: it is possible to make csv go to a specific line in the file? I want to make the program start at a random line)\nEdit: I already have a copy of the list as an .sql file as well, how could I implement that into Python?","AnswerCount":6,"Available Count":2,"Score":0.0333209931,"is_accepted":false,"ViewCount":20439,"Q_Id":2299454,"Users Score":1,"Answer":"You can't go directly to a specific line in the file because lines are variable-length, so the only way to know when line #n starts is to search for the first n newlines. And it's not enough to just look for '\\n' characters because CSV allows newlines in table cells, so you really do have to parse the file anyway.","Q_Score":4,"Tags":"python,dictionary,csv,large-files","A_Id":2443606,"CreationDate":"2010-02-19T20:51:00.000","Title":"How do quickly search through a .csv file in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a range of data that I have approximated using a polynomial of degree 2 in Python. 
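One way to act on the questioner's own idea of exploiting the alphabetical ordering (this is not taken from the answers above) is to read the sorted key column once and binary-search it with the bisect module; the file name and column layout are assumptions, and the whole key list is held in memory in exchange for fast repeated lookups:

import bisect
import csv

keys = []
rows = []
with open("entries.csv") as f:            # on Python 3, pass newline=""
    for row in csv.reader(f):
        keys.append(row[0])               # assumes column 0 is the sorted key
        rows.append(row)

def lookup(word):
    i = bisect.bisect_left(keys, word)
    if i < len(keys) and keys[i] == word:
        return rows[i]
    return None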
I want to calculate the area underneath this polynomial between 0 and 1.\nIs there a calculus, or similar package from numpy that I can use, or should I just make a simple function to integrate these functions?\nI'm a little unclear what the best approach for defining mathematical functions is.\nThanks.","AnswerCount":5,"Available Count":1,"Score":0.1194272985,"is_accepted":false,"ViewCount":4045,"Q_Id":2352499,"Users Score":3,"Answer":"It might be overkill to resort to general-purpose numeric integration algorithms for your special case...if you work out the algebra, there's a simple expression that gives you the area.\nYou have a polynomial of degree 2: f(x) = ax2 + bx + c\nYou want to find the area under the curve for x in the range [0,1].\nThe antiderivative F(x) = ax3\/3 + bx2\/2 + cx + C\nThe area under the curve from 0 to 1 is: F(1) - F(0) = a\/3 + b\/2 + c \nSo if you're only calculating the area for the interval [0,1], you might consider\nusing this simple expression rather than resorting to the general-purpose methods.","Q_Score":4,"Tags":"python,polynomial-math,numerical-integration","A_Id":2352875,"CreationDate":"2010-02-28T20:18:00.000","Title":"Calculating the area underneath a mathematical function","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to solve a Poisson equation on a rectangular domain which ends up being a linear problem like\n Ax=b\nbut since I know the boundary conditions, there are nodes where I have the solution values. I guess my question is...\n How can I solve the sparse system Ax=b if I know what some of the coordinates of x are and the undetermined values depend on these as well? It's the same as a normal solve except I know some of the solution to begin with.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":533,"Q_Id":2361176,"Users Score":1,"Answer":"If I understand correctly, some elements of x are known, and some are not, and you want to solve Ax = b for the unknown values of x, correct?\nLet Ax = [A1 A2][x1; x2] = b, where the vector x = [x1; x2], the vector x1 has the unknown values of x, and vector x2 have the known values of x. Then, A1x1 = b - A2x2. 
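The closed-form area from the answer above, written out and cross-checked against numpy's polynomial integration; the coefficients are illustrative:

import numpy as np

# Area under f(x) = a*x**2 + b*x + c on [0, 1], per the closed form above.
def area_01(a, b, c):
    return a / 3.0 + b / 2.0 + c

p = np.poly1d([2.0, 3.0, 1.0])           # 2x^2 + 3x + 1
P = p.integ()                            # antiderivative
print(area_01(2.0, 3.0, 1.0), P(1) - P(0))   # both print 3.1666...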
Therefore, solve for x1 using scipy.linalg.solve or any other desired solver.","Q_Score":1,"Tags":"python,numpy,sparse-matrix,poisson","A_Id":2361204,"CreationDate":"2010-03-02T05:57:00.000","Title":"Solving Sparse Linear Problem With Some Known Boundary Values","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a set of X,Y data points (about 10k) that are easy to plot as a scatter plot but that I would like to represent as a heatmap.\nI looked through the examples in MatPlotLib and they all seem to already start with heatmap cell values to generate the image.\nIs there a method that converts a bunch of x,y, all different, to a heatmap (where zones with higher frequency of x,y would be \"warmer\")?","AnswerCount":12,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":284427,"Q_Id":2369492,"Users Score":2,"Answer":"Make a 2-dimensional array that corresponds to the cells in your final image, called say heatmap_cells and instantiate it as all zeroes.\nChoose two scaling factors that define the difference between each array element in real units, for each dimension, say x_scale and y_scale. Choose these such that all your datapoints will fall within the bounds of the heatmap array.\nFor each raw datapoint with x_value and y_value:\nheatmap_cells[floor(x_value\/x_scale),floor(y_value\/y_scale)]+=1","Q_Score":222,"Tags":"python,matplotlib,heatmap,histogram2d","A_Id":2371227,"CreationDate":"2010-03-03T07:42:00.000","Title":"Generate a heatmap in MatPlotLib using a scatter data set","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I process a lot of text\/data that I exchange between Python, R, and sometimes Matlab.\nMy go-to is the flat text file, but also use SQLite occasionally to store the data and access from each program (not Matlab yet though). I don't use GROUPBY, AVG, etc. in SQL as much as I do these operations in R, so I don't necessarily require the database operations.\nFor such applications that requires exchanging data among programs to make use of available libraries in each language, is there a good rule of thumb on which data exchange format\/method to use (even XML or NetCDF or HDF5)?\nI know between Python -> R there is rpy or rpy2 but I was wondering about this question in a more general sense - I use many computers which all don't have rpy2 and also use a few other pieces of scientific analysis software that require access to the data at various times (the stages of processing and analysis are also separated).","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3563,"Q_Id":2392017,"Users Score":15,"Answer":"If all the languages support SQLite - use it. 
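A sketch of the partitioned solve just described, restricted to the equations at the unknown nodes so that A1 is square (a common convention for Dirichlet boundary conditions, assumed here rather than stated in the answer); 'known' is a numpy boolean mask over the solution vector:

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

# Solve A1 x1 = b1 - A2 x2 for the interior unknowns x1, given boundary x2.
def solve_with_known(A, b, known, x_known):
    A = csr_matrix(A)
    unk = np.where(~known)[0]
    kno = np.where(known)[0]
    A1 = A[unk, :][:, unk]               # block coupling unknowns to unknowns
    A2 = A[unk, :][:, kno]               # block coupling unknowns to knowns
    x = np.zeros(len(b))
    x[kno] = x_known
    x[unk] = spsolve(A1, b[unk] - A2.dot(x_known))
    return x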
The power of SQL might not be useful to you right now, but it probably will be at some point, and it saves you having to rewrite things later when you decide you want to be able to query your data in more complicated ways.\nSQLite will also probably be substantially faster if you only want to access certain bits of data in your datastore - since doing that with a flat-text file is challenging without reading the whole file in (though it's not impossible).","Q_Score":8,"Tags":"python,sql,database,r,file-format","A_Id":2392026,"CreationDate":"2010-03-06T09:30:00.000","Title":"SQLite or flat text file?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing some prototyping with OpenCV for a hobby project involving processing of real time camera data. I wonder if it is worth the effort to reimplement this in C or C++ when I have it all figured out or if no significant performance boost can be expected. The program basically chains OpenCV functions, so the main part of the work should be done in native code anyway.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1352,"Q_Id":2432792,"Users Score":5,"Answer":"You've answered your own question pretty well. Most of the expensive computations should be within the OpenCV library, and thus independent of the language you use. \nIf you're really concerned about efficiency, you could profile your code and confirm that this is indeed the case. If need be, your custom processing functions, if any, could be coded in C\/C++ and exposed in python through the method of your choice (eg: boost-python), to follow the same approach.\nBut in my experience, python works just fine as a \"composition\" tool for such a use.","Q_Score":1,"Tags":"c++,python,c,performance,opencv","A_Id":2433626,"CreationDate":"2010-03-12T12:54:00.000","Title":"OpenCV performance in different languages","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing some prototyping with OpenCV for a hobby project involving processing of real time camera data. I wonder if it is worth the effort to reimplement this in C or C++ when I have it all figured out or if no significant performance boost can be expected. The program basically chains OpenCV functions, so the main part of the work should be done in native code anyway.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1352,"Q_Id":2432792,"Users Score":0,"Answer":"OpenCV used to utilize IPP, which is very fast. However, OpenCV 2.0 does not. You might customize your OpenCV using IPP, for example color conversion routines.","Q_Score":1,"Tags":"c++,python,c,performance,opencv","A_Id":2470491,"CreationDate":"2010-03-12T12:54:00.000","Title":"OpenCV performance in different languages","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am interested to perform kmeans clustering on a list of words with the distance measure being Leveshtein. 
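A minimal sqlite3 sketch of the "only pull out the bits you need" point above; the database file, table and column names are assumptions:

import sqlite3

conn = sqlite3.connect("exchange.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS samples (run TEXT, t REAL, value REAL)")
cur.executemany("INSERT INTO samples VALUES (?, ?, ?)",
                [("a", 0.0, 1.5), ("a", 0.1, 1.7), ("b", 0.0, 0.9)])
conn.commit()

# Pull out only the rows you need, instead of parsing a whole flat file:
cur.execute("SELECT t, value FROM samples WHERE run = ?", ("a",))
print(cur.fetchall())
conn.close()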
\n1) I know there are a lot of frameworks out there, including scipy and orange that has a kmeans implementation. However they all require some sort of vector as the data which doesn't really fit me.\n2) I need a good clustering implementation. I looked at python-clustering and realize that it doesn't a) return the sum of all the distance to each centroid, and b) it doesn't have any sort of iteration limit or cut off which ensures the quality of the clustering. python-clustering and the clustering algorithm on daniweb doesn't really work for me.\nCan someone find me a good lib? Google hasn't been my friend","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":1947,"Q_Id":2459739,"Users Score":1,"Answer":"Yeah I think there isn't a good implementation to what I need. \nI have some crazy requirements, like distance caching etc.\nSo i think i will just write my own lib and release it as GPLv3 soon.","Q_Score":9,"Tags":"python,cluster-analysis","A_Id":2460285,"CreationDate":"2010-03-17T03:29:00.000","Title":"Python KMeans clustering words","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any good open-source implementation of Mersenne Twister and other good random number generators in Python available? I would like to use in for teaching math and comp sci majors? I am also looking for the corresponding theoretical support. \nEdit: Source code of Mersenne Twister is readily available in various languages such as C (random.py) or pseudocode (Wikipedia) but I could not find one in Python.","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":5857,"Q_Id":2469031,"Users Score":7,"Answer":"Mersenne Twister is an implementation that is used by standard python library. You can see it in random.py file in your python distribution.\nOn my system (Ubuntu 9.10) it is in \/usr\/lib\/python2.6, on Windows it should be in C:\\Python26\\Lib","Q_Score":4,"Tags":"python,random,open-source,mersenne-twister","A_Id":2469142,"CreationDate":"2010-03-18T10:31:00.000","Title":"Open-source implementation of Mersenne Twister in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My question is rather complicated for me to explain, as i'm not really good at maths, but i'll try to be as clear as possible.\nI'm trying to code a cluster in python, which will generate words given a charset (i.e. with lowercase: aaaa, aaab, aaac, ..., zzzz) and make various operations on them. \nI'm searching how to calculate, given the charset and the number of nodes, what range each node should work on (i.e.: node1: aaaa-azzz, node2: baaa-czzz, node3: daaa-ezzz, ...). Is it possible to make an algorithm that could compute this, and if it is, how could i implement this in python?\nI really don't know how to do that, so any help would be much appreciated","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":260,"Q_Id":2488670,"Users Score":1,"Answer":"You should be able to treat your words as numerals in a strange base. For example, let's say you have a..z as your charset (26 characters), 4 character strings, and you want to distribute among equally 10 machines. 
Then there are a total of 26^4 strings, so each machine gets 26^4\/10 strings. The first machine will get strings 0 through 26^4\/10, the next 26^4\/10 through 26^4\/5, etc.\nTo convert the numbers to strings, just write the number in base 26 using your charset as the numbers. So 0 is 'aaaa' and 26^4\/10 = 2*26^3 + 15*26^2 + 15*26 +15 is 'cppp'.","Q_Score":1,"Tags":"python,algorithm,cluster-analysis,character-encoding","A_Id":2489898,"CreationDate":"2010-03-21T20:55:00.000","Title":"python parallel computing: split keyspace to give each node a range to work on","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using matplotlib for a graphing application. I am trying to create a graph which has strings as the X values. However, the using plot function expects a numeric value for X.\nHow can I use string X values?","AnswerCount":5,"Available Count":1,"Score":0.1586485043,"is_accepted":false,"ViewCount":48768,"Q_Id":2497449,"Users Score":4,"Answer":"Why not just make the x value some auto-incrementing number and then change the label?\n--jed","Q_Score":11,"Tags":"python,matplotlib","A_Id":2497667,"CreationDate":"2010-03-23T03:47:00.000","Title":"Plot string values in matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Guys, I here have 200 separate csv files named from SH (1) to SH (200). I want to merge them into a single csv file. How can I do it?","AnswerCount":22,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":189456,"Q_Id":2512386,"Users Score":78,"Answer":"Why can't you just sed 1d sh*.csv > merged.csv?\nSometimes you don't even have to use python!","Q_Score":96,"Tags":"python,csv,merge,concatenation","A_Id":5876058,"CreationDate":"2010-03-25T00:24:00.000","Title":"how to merge 200 csv files in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have FigureCanvasWxAgg instance with a figure displayed on a frame. If user clicks on the canvas another frame with a new FigureCanvasWxAgg containing the same figure will be shown. By now closing the new frame can result in destroying the C++ part of the figure so that it won't be available for the first frame. \nHow can I save the figure? Python deepcopy from copy module does't work in this case.\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1759,"Q_Id":2513786,"Users Score":1,"Answer":"I'm not familiar with the inner workings, but could easily imagine how disposing of a frame damages the figure data. Is it expensive to draw? 
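A short sketch of the base-26 scheme described above, converting numbers back to words and handing each node a contiguous range; the charset, word length and node count are parameters you would adjust:

charset = "abcdefghijklmnopqrstuvwxyz"
length = 4

def number_to_word(n):
    # Write n in base len(charset), most significant digit first.
    word = []
    for _ in range(length):
        n, digit = divmod(n, len(charset))
        word.append(charset[digit])
    return "".join(reversed(word))

def ranges(num_nodes):
    total = len(charset) ** length
    step = total // num_nodes
    for i in range(num_nodes):
        start = i * step
        end = total if i == num_nodes - 1 else (i + 1) * step
        yield number_to_word(start), number_to_word(end - 1)

for first, last in ranges(10):
    print(first, last)        # first range starts at 'aaaa', second near 'cppp'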
Otherwise I'd take the somewhat chickenish approach of simply redrawing it ;)","Q_Score":8,"Tags":"python,wxpython,copy,matplotlib","A_Id":3120753,"CreationDate":"2010-03-25T07:45:00.000","Title":"How to copy matplotlib figure?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create a list containing the 3-D coords of a grid of regularly spaced points, each as a 3-element tuple. I'm looking for advice on the most efficient way to do this. \nIn C++ for instance, I simply loop over three nested loops, one for each coordinate. In Matlab, I would probably use the meshgrid function (which would do it in one command). I've read about meshgrid and mgrid in Python, and I've also read that using numpy's broadcasting rules is more efficient. It seems to me that using the zip function in combination with the numpy broadcast rules might be the most efficient way, but zip doesn't seem to be overloaded in numpy.","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":1877,"Q_Id":2518730,"Users Score":2,"Answer":"I would say go with meshgrid or mgrid, in particular if you need non-integer coordinates. I'm surprised that Numpy's broadcasting rules would be more efficient, as meshgrid was designed especially for the problem that you want to solve.","Q_Score":4,"Tags":"python,numpy","A_Id":2521903,"CreationDate":"2010-03-25T19:29:00.000","Title":"A 3-D grid of regularly spaced points","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Since Cassandra doesn't have MapReduce built in yet (I think it's coming in 0.7), is it dumb to try and MapReduce with my Python client or should I just use CouchDB or Mongo or something?\nThe application is stats collection, so I need to be able to sum values with grouping to increment counters. I'm not, but pretend I'm making Google analytics so I want to keep track of which browsers appear, which pages they went to, and visits vs. pageviews.\nI would just atomically update my counters on write, but Cassandra isn't very good at counters either.\nMay Cassandra just isn't the right choice for this?\nThanks!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1625,"Q_Id":2527173,"Users Score":6,"Answer":"Cassandra supports map reduce since version 0.6. (Current stable release is 0.5.1, but go ahead and try the new map reduce functionality in 0.6.0-beta3) To get started I recommend to take a look at the word count map reduce example in 'contrib\/word_count'.","Q_Score":4,"Tags":"python,mongodb,cassandra,couchdb,nosql","A_Id":2528683,"CreationDate":"2010-03-26T22:28:00.000","Title":"Is Using Python to MapReduce for Cassandra Dumb?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a application that does a certain experiment 1000 times (multi-threaded, so that multiple experiments are done at the same time). Every experiment needs appr. 50.000 random.random() calls.\nWhat is the best approach to get this really random. 
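A tiny illustration of the mgrid suggestion above, producing the flat list of (x, y, z) tuples the questioner asked for; the spacing and extent are placeholders:

import numpy as np

# Regularly spaced 3-D grid via np.mgrid, flattened to (x, y, z) tuples.
step = 0.5
x, y, z = np.mgrid[0:2:step, 0:2:step, 0:2:step]
points = zip(x.ravel(), y.ravel(), z.ravel())
print(list(points)[:3])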
I could copy a random object to every experiment and do than a jumpahead of 50.000 * expid. The documentation suggests that jumpahead(1) already scrambles the state, but is that really true?\nOr is there another way to do this in 'the best way'?\n(No, the random numbers are not used for security, but for a metropolis hasting algorithm. The only requirement is that the experiments are independent, not whether the random sequence is somehow predictable or so)","AnswerCount":4,"Available Count":1,"Score":0.1488850336,"is_accepted":false,"ViewCount":2828,"Q_Id":2546039,"Users Score":3,"Answer":"jumpahead(1) is indeed sufficient (and identical to jumpahead(50000) or any other such call, in the current implementation of random -- I believe that came in at the same time as the Mersenne Twister based implementation). So use whatever argument fits in well with your programs' logic. (Do use a separate random.Random instance per thread for thread-safety purposes of course, as your question already hints).\n(random module generated numbers are not meant to be cryptographically strong, so it's a good thing that you're not using for security purposes;-).","Q_Score":4,"Tags":"python,random","A_Id":2546266,"CreationDate":"2010-03-30T14:35:00.000","Title":"How should I use random.jumpahead in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a data viewer in Python\/IPython like the variable editor in MATLAB?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2415,"Q_Id":2562697,"Users Score":0,"Answer":"in ipython, ipipe.igrid() can be used to view tabular data.","Q_Score":4,"Tags":"python,numpy,ipython","A_Id":2569226,"CreationDate":"2010-04-01T18:31:00.000","Title":"MATLAB-like variable editor in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a data viewer in Python\/IPython like the variable editor in MATLAB?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2415,"Q_Id":2562697,"Users Score":0,"Answer":"Even Pycharm will be a good option if you are looking for MATLAB like editor.","Q_Score":4,"Tags":"python,numpy,ipython","A_Id":28463696,"CreationDate":"2010-04-01T18:31:00.000","Title":"MATLAB-like variable editor in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm porting an C++ scientific application to python, and as I'm new to python, some problems come to my mind:\n1) I'm defining a class that will contain the coordinates (x,y). These values will be accessed several times, but they only will be read after the class instantiation. Is it better to use an tuple or an numpy array, both in memory and access time wise?\n2) In some cases, these coordinates will be used to build a complex number, evaluated on a complex function, and the real part of this function will be used. 
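One way to follow the per-experiment random.Random advice above without relying on jumpahead is to give each experiment its own seeded generator; the seeding scheme here is only an illustration and is a weaker independence guarantee than a true jumpahead:

import random

# One generator per experiment; on Python 2 you could instead copy one
# Random object per experiment and call jumpahead(expid) on each copy.
def make_generators(num_experiments, base_seed=12345):
    return [random.Random(base_seed + expid) for expid in range(num_experiments)]

gens = make_generators(4)
print([g.random() for g in gens])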
Assuming that there is no way to separate real and complex parts of this function, and the real part will have to be used on the end, maybe is better to use directly complex numbers to store (x,y)? How bad is the overhead with the transformation from complex to real in python? The code in c++ does a lot of these transformations, and this is a big slowdown in that code.\n3) Also some coordinates transformations will have to be performed, and for the coordinates the x and y values will be accessed in separate, the transformation be done, and the result returned. The coordinate transformations are defined in the complex plane, so is still faster to use the components x and y directly than relying on the complex variables?\nThank you","AnswerCount":2,"Available Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":6196,"Q_Id":2563773,"Users Score":3,"Answer":"A numpy array with an extra dimension is tighter in memory use, and at least as fast!, as a numpy array of tuples; complex numbers are at least as good or even better, including for your third question. BTW, you may have noticed that -- while questions asked later than yours were getting answers aplenty -- your was laying fallow: part of the reason is no doubt that asking three questions within a question turns responders off. Why not just ask one question per question? It's not as if you get charged for questions or anything, you know...!-)","Q_Score":10,"Tags":"python,arrays,numpy,tuples,complex-numbers","A_Id":2564787,"CreationDate":"2010-04-01T21:17:00.000","Title":"Better use a tuple or numpy array for storing coordinates","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm porting an C++ scientific application to python, and as I'm new to python, some problems come to my mind:\n1) I'm defining a class that will contain the coordinates (x,y). These values will be accessed several times, but they only will be read after the class instantiation. Is it better to use an tuple or an numpy array, both in memory and access time wise?\n2) In some cases, these coordinates will be used to build a complex number, evaluated on a complex function, and the real part of this function will be used. Assuming that there is no way to separate real and complex parts of this function, and the real part will have to be used on the end, maybe is better to use directly complex numbers to store (x,y)? How bad is the overhead with the transformation from complex to real in python? The code in c++ does a lot of these transformations, and this is a big slowdown in that code.\n3) Also some coordinates transformations will have to be performed, and for the coordinates the x and y values will be accessed in separate, the transformation be done, and the result returned. The coordinate transformations are defined in the complex plane, so is still faster to use the components x and y directly than relying on the complex variables?\nThank you","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":6196,"Q_Id":2563773,"Users Score":7,"Answer":"In terms of memory consumption, numpy arrays are more compact than Python tuples.\nA numpy array uses a single contiguous block of memory. All elements of the numpy array must be of a declared type (e.g. 32-bit or 64-bit float.) 
A Python tuple does not necessarily use a contiguous block of memory, and the elements of the tuple can be arbitrary Python objects, which generally consume more memory than numpy numeric types.\nSo this issue is a hands-down win for numpy, (assuming the elements of the array can be stored as a numpy numeric type).\nOn the issue of speed, I think the choice boils down to the question, \"Can you vectorize your code?\"\nThat is, can you express your calculations as operations done on entire arrays element-wise. \nIf the code can be vectorized, then numpy will most likely be faster than Python tuples. (The only case I could imagine where it might not be, is if you had many very small tuples. In this case the overhead of forming the numpy arrays and one-time cost of importing numpy might drown-out the benefit of vectorization.)\nAn example of code that could not be vectorized would be if your calculation involved looking at, say, the first complex number in an array z, doing a calculation which produces an integer index idx, then retrieving z[idx], doing a calculation on that number, which produces the next index idx2, then retrieving z[idx2], etc. This type of calculation might not be vectorizable. In this case, you might as well use Python tuples, since you won't be able to leverage numpy's strength.\nI wouldn't worry about the speed of accessing the real\/imaginary parts of a complex number. My guess is the issue of vectorization will most likely determine which method is faster. (Though, by the way, numpy can transform an array of complex numbers to their real parts simply by striding over the complex array, skipping every other float, and viewing the result as floats. Moreover, the syntax is dead simple: If z is a complex numpy array, then z.real is the real parts as a float numpy array. This should be far faster than the pure Python approach of using a list comprehension of attribute lookups: [z.real for z in zlist].)\nJust out of curiosity, what is your reason for porting the C++ code to Python?","Q_Score":10,"Tags":"python,arrays,numpy,tuples,complex-numbers","A_Id":2564868,"CreationDate":"2010-04-01T21:17:00.000","Title":"Better use a tuple or numpy array for storing coordinates","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am porting code from Matlab to Python and am having trouble finding a replacement for the firls( ) routine. It is used for, least-squares linear-phase Finite Impulse Response (FIR) filter design.\nI looked at scipy.signal and nothing there looked like it would do the trick. Of course I was able to replace my remez and freqz algorithsm, so that's good.\nOn one blog I found an algorithm that implemented this filter without weighting, but I need one with weights.\nThanks, David","AnswerCount":7,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4383,"Q_Id":2568707,"Users Score":0,"Answer":"It seems unlikely that you'll find exactly what you seek already written in Python, but perhaps the Matlab function's help page gives or references a description of the algorithm?","Q_Score":3,"Tags":"python,algorithm,math,matlab,digital-filter","A_Id":2569232,"CreationDate":"2010-04-02T19:23:00.000","Title":"Does Python\/Scipy have a firls( ) replacement (i.e. 
a weighted, least squares, FIR filter design)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Python 3.1.1 on Mac OS X 10.6.2 and need an interface to R. When browsing the internet I found out about RPy. Is this the right choice? \nCurrently, a program in Python computes a distance matrix and, stores it in a file. I invoke R separately in an interactive way and read in the matrix for cluster analysis. In order to\nsimplify computation one could prepare a script file for R then call it from Python and read back the results. Since I am new to Python, I would not like to go back to 2.6.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4111,"Q_Id":2573132,"Users Score":20,"Answer":"edit: Rewrite to summarize the edits that accumulated over time.\nThe current rpy2 release (2.3.x series) has full support for Python 3.3, while\nno claim is made about Python 3.0, 3.1, or 3.2.\nAt the time of writing the next rpy2 release (under development, 2.4.x series) is only supporting Python 3.3.\nHistory of Python 3 support:\n\nrpy2-2.1.0-dev \/ Python 3 branch in the repository - experimental support and application for a Google Summer of Code project consisting in porting rpy2 to Python 3 (under the Python umbrella)\napplication was accepted and thanks to Google's funding support for Python 3 slowly got into the main codebase (there was a fair bit of work still to be done after the GSoC - it made it for branch version_2.2.x).","Q_Score":12,"Tags":"python,r,interface,python-3.x","A_Id":2661971,"CreationDate":"2010-04-04T00:25:00.000","Title":"What is the best interface from Python 3.1.1 to R?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wish to make a Histogram in Matplotlib from an input file containing the raw data (.txt). I am facing issues in referring to the input file. I guess it should be a rather small program. Any Matplotlib gurus, any help ? \nI am not asking for the code, some inputs should put me on the right way !","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":14040,"Q_Id":2590328,"Users Score":0,"Answer":"You can't directly tell matplotlib to make a histogram from an input file - you'll need to open the file yourself and get the data from it. How you'd do that depends on the format of the file - if it's just a file with a number on each line, you can just go through each line, strip() spaces and newlines, and use float() to convert it to a number.","Q_Score":5,"Tags":"python,matplotlib,histogram","A_Id":2590381,"CreationDate":"2010-04-07T06:05:00.000","Title":"Histogram in Matplotlib with input file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Most of us know that the command random.randint(1,n) in Python (2.X.X) would generate a number in random (pseudo-random) between 1 and n. 
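A sketch of the steps described in the histogram answer above, assuming one number per line; the file name and bin count are placeholders:

import matplotlib.pyplot as plt

# Read one number per line, as described above, then hand the list to hist().
values = []
with open("data.txt") as f:
    for line in f:
        line = line.strip()
        if line:
            values.append(float(line))

plt.hist(values, bins=50)
plt.show()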
I am interested in knowing what is the upper limit for n ?","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":19740,"Q_Id":2597444,"Users Score":4,"Answer":"No doubt you have a bounded amount of memory, and address space, on your machine; for example, for a good 64-bit machine, 64 GB of RAM [[about 2**36 bytes]] and a couple of TB of disk (usable as swap space for virtual memory) [[about 2**41 bytes]]. So, the \"upper bound\" of a Python long integer will be the largest one representable in the available memory -- a bit less than 256**(2**40) if you are in absolutely no hurry and can swap like crazy, a bit more than 256**(2*36) (with just a little swapping but not too much) in practical terms.\nUnfortunately it would take quite a bit of time and space to represent these ridiculously humongous numbers in decimal, so, instead of showing them, let me check back with you -- why would you even care about such a ridiculous succession of digits as to constitute the \"upper bound\" you're inquiring about? I think it's more practical to put it this way: especially on a 64-bit machine with decent amounts of RAM and disk, upper bounds of long integers are way bigger than anything you'll ever compute. Technically, a mathematician would insist, they're not infinity, of course... but practically, they might as well be!-)","Q_Score":9,"Tags":"python,random","A_Id":2597749,"CreationDate":"2010-04-08T03:41:00.000","Title":"random.randint(1,n) in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In python the function random() generates a random float uniformly in the semi-open range [0.0, 1.0). In principle can it ever generate 0.0 (i.e. zero) and 1.0 (i.e. unity)? What is the scenario in practicality?","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":1409,"Q_Id":2621055,"Users Score":11,"Answer":"The [ indicates that 0.0 is included in the range of valid outputs. The ) indicates 1.0 is not in the range of valid outputs.","Q_Score":4,"Tags":"python,random","A_Id":2621082,"CreationDate":"2010-04-12T09:44:00.000","Title":"random() in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In python the function random() generates a random float uniformly in the semi-open range [0.0, 1.0). In principle can it ever generate 0.0 (i.e. zero) and 1.0 (i.e. unity)? What is the scenario in practicality?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1409,"Q_Id":2621055,"Users Score":13,"Answer":"0.0 can be generated; 1.0 cannot (since it isn't within the range, hence the ) as opposed to [).\nThe probability of generating 0.0 is equal to the probability of generating any other number within that range, namely, 1\/X where X is the number of different possible results. 
For a standard unsigned double-precision floating point, this usually means 53 bits of fractional component, for 2^53 possible combinations, leading to a 1\/(2^53) chance of generating exactly 0.0.\nSo while it's possible for it to return exactly 0.0, it's unlikely that you'll see it any time soon - but it's just as unlikely that you'd see exactly any other particular value you might choose in advance.","Q_Score":4,"Tags":"python,random","A_Id":2621096,"CreationDate":"2010-04-12T09:44:00.000","Title":"random() in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am curious if there is an algorithm\/method exists to generate keywords\/tags from a given text, by using some weight calculations, occurrence ratio or other tools.\nAdditionally, I will be grateful if you point any Python based solution \/ library for this.\nThanks","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":28812,"Q_Id":2661778,"Users Score":2,"Answer":"A very simple solution to the problem would be:\n\ncount the occurences of each word in the text\nconsider the most frequent terms as the key phrases\nhave a black-list of 'stop words' to remove common words like the, and, it, is etc\n\nI'm sure there are cleverer, stats based solutions though.\nIf you need a solution to use in a larger project rather than for interests sake, Yahoo BOSS has a key term extraction method.","Q_Score":50,"Tags":"python,tags,machine-learning,nlp,nltk","A_Id":2661789,"CreationDate":"2010-04-18T09:39:00.000","Title":"tag generation from a text content","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a non-computer related data logger, that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is through a csv file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. some of the data is organized as bins item_type, item_class, item_dimension_class, and other data is more unique, such as item_weight, item_color, date_collected, and so on ...\nCurrently, I do statistical analysis on the data using a python\/numpy\/matplotlib program I wrote. It works fine, but the problem is, I'm the only one who can use it, since it and the data live on my computer.\nI'd like to publish the data on the web using a postgres db; however, I need to find or implement a statistical tool that'll take a large postgres table, and return statistical results within an adequate time frame. I'm not familiar with python for the web; however, I'm proficient with PHP on the web side, and python on the offline side.\nusers should be allowed to create their own histograms, data analysis. For example, a user can search for all items that are blue shipped between week x and week y, while another user can search for sort the weight distribution of all items by hour for all year long. \nI was thinking of creating and indexing my own statistical tools, or automate the process somehow to emulate most queries. 
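A minimal sketch of the "count occurrences, drop stop words" suggestion above; the stop-word list and tokenising regex are illustrative only:

import re
from collections import Counter

STOP_WORDS = set("the and it is a an of to in for on over".split())

def keywords(text, n=10):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(n)]

print(keywords("The quick brown fox jumps over the lazy dog. The dog sleeps."))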
This seemed inefficient.\nI'm looking forward to hearing your ideas\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1770,"Q_Id":2667537,"Users Score":1,"Answer":"I think you can utilize your current combination(python\/numpy\/matplotlib) fully if the number of users are not too big. I do some similar works, and my data size a little more than 10g. Data are stored in a few sqlite files, and i use numpy to analyze data, PIL\/matplotlib to generate chart files(png, gif), cherrypy as a webserver, mako as a template language. \nIf you need more server\/client database, then you can migrate to postgresql, but you can still fully use your current programs if you go with a python web framework, like cherrypy.","Q_Score":4,"Tags":"php,python,postgresql,statistics","A_Id":2669456,"CreationDate":"2010-04-19T12:58:00.000","Title":"Statistical analysis on large data set to be published on the web","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The question is, basically: what would be more preferable, both performance-wise and design-wise - to have a list of objects of a Python class or to have several lists of numerical properties?\nI am writing some sort of a scientific simulation which involves a rather large system of interacting particles. For simplicity, let's say we have a set of balls bouncing inside a box so each ball has a number of numerical properties, like x-y-z-coordinates, diameter, mass, velocity vector and so on. How to store the system better? Two major options I can think of are:\nto make a class \"Ball\" with those properties and some methods, then store a list of objects of the class, e. g. [b1, b2, b3, ...bn, ...], where for each bn we can access bn.x, bn.y, bn.mass and so on;\nto make an array of numbers for each property, then for each i-th \"ball\" we can access it's 'x' coordinate as xs[i], 'y' coordinate as ys[i], 'mass' as masses[i] and so on;\nTo me it seems that the first option represents a better design. The second option looks somewhat uglier, but might be better in terms of performance, and it could be easier to use it with numpy and scipy, which I try to use as much as I can.\nI am still not sure if Python will be fast enough, so it may be necessary to rewrite it in C++ or something, after initial prototyping in Python. Would the choice of data representation be different for C\/C++? What about a hybrid approach, e.g. Python with C++ extension?\nUpdate: I never expected any performance gain from parallel arrays per se, but in a mixed environment like Python + Numpy (or whatever SlowScriptingLanguage + FastNativeLibrary) using them may (or may not?) let you move more work out of you slow scripting code and into the fast native library.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":1102,"Q_Id":2723790,"Users Score":1,"Answer":"I think it depends on what you're going to be doing with them, and how often you're going to be working with (all attributes of one particle) vs (one attribute of all particles). The former is better suited to the object approach; the latter is better suited to the array approach.\nI was facing a similar problem (although in a different domain) a couple of years ago. 
The project got deprioritized before I actually implemented this phase, but I was leaning towards a hybrid approach, where in addition to the Ball class I would have an Ensemble class. The Ensemble would not be a list or other simple container of Balls, but would have its own attributes (which would be arrays) and its own methods. Whether the Ensemble is created from the Balls, or the Balls from the Ensemble, depends on how you're going to construct them. \nOne of my coworkers was arguing for a solution where the fundamental object was an Ensemble which might contain only one Ball, so that no calling code would ever have to know whether you were operating on just one Ball (do you ever do that for your application?) or on many.","Q_Score":3,"Tags":"python,performance,data-structures,numpy","A_Id":2726598,"CreationDate":"2010-04-27T18:13:00.000","Title":"List of objects or parallel arrays of properties?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"The question is, basically: what would be more preferable, both performance-wise and design-wise - to have a list of objects of a Python class or to have several lists of numerical properties?\nI am writing some sort of a scientific simulation which involves a rather large system of interacting particles. For simplicity, let's say we have a set of balls bouncing inside a box so each ball has a number of numerical properties, like x-y-z-coordinates, diameter, mass, velocity vector and so on. How to store the system better? Two major options I can think of are:\nto make a class \"Ball\" with those properties and some methods, then store a list of objects of the class, e. g. [b1, b2, b3, ...bn, ...], where for each bn we can access bn.x, bn.y, bn.mass and so on;\nto make an array of numbers for each property, then for each i-th \"ball\" we can access it's 'x' coordinate as xs[i], 'y' coordinate as ys[i], 'mass' as masses[i] and so on;\nTo me it seems that the first option represents a better design. The second option looks somewhat uglier, but might be better in terms of performance, and it could be easier to use it with numpy and scipy, which I try to use as much as I can.\nI am still not sure if Python will be fast enough, so it may be necessary to rewrite it in C++ or something, after initial prototyping in Python. Would the choice of data representation be different for C\/C++? What about a hybrid approach, e.g. Python with C++ extension?\nUpdate: I never expected any performance gain from parallel arrays per se, but in a mixed environment like Python + Numpy (or whatever SlowScriptingLanguage + FastNativeLibrary) using them may (or may not?) let you move more work out of you slow scripting code and into the fast native library.","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1102,"Q_Id":2723790,"Users Score":2,"Answer":"Having an object for each ball in this example is certainly better design. Parallel arrays are really a workaround for languages that do not support proper objects. I wouldn't use them in a language with OO capabilities unless it's a tiny case that fits within a function (and maybe not even then) or if I've run out of every other optimization option and the profiler shows that property access is the culprit. 
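A minimal sketch of the Ensemble idea from the answer above, holding each property as a parallel numpy array so whole-system operations can be vectorised; the attribute names and the kinetic-energy method are assumptions:

import numpy as np

class Ensemble(object):
    def __init__(self, n):
        self.x = np.zeros(n)
        self.y = np.zeros(n)
        self.z = np.zeros(n)
        self.vx = np.zeros(n)
        self.vy = np.zeros(n)
        self.vz = np.zeros(n)
        self.mass = np.ones(n)

    def kinetic_energy(self):
        # Vectorized over all balls at once -- no Python-level loop.
        v2 = self.vx ** 2 + self.vy ** 2 + self.vz ** 2
        return 0.5 * np.sum(self.mass * v2)

balls = Ensemble(1000)
balls.vx[:] = 1.0
print(balls.kinetic_energy())    # 500.0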
This applies twice as much to Python as to C++, as the former places a large emphasis on readability and elegance.","Q_Score":3,"Tags":"python,performance,data-structures,numpy","A_Id":2723845,"CreationDate":"2010-04-27T18:13:00.000","Title":"List of objects or parallel arrays of properties?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"my problem is the GIL of course. While I'm analysing data it would be nice to present some plots in between (so it's not too boring waiting for results)\nBut the GIL prevents this (and this is bringing me to the point of asking myself if Python was such a good idea in the first place).\nI can only display the plot, wait till the user closes it and commence calculations after that. A waste of time obviously.\nI already tried the subprocess and multiprocessing modules but can't seem to get them to work. \nAny thoughts on this one?\nThanks\nEdit: Ok so it's not the GIL but show().","AnswerCount":4,"Available Count":3,"Score":0.1488850336,"is_accepted":false,"ViewCount":737,"Q_Id":2744530,"Users Score":3,"Answer":"I think you'll need to put the graph into a proper Windowing system, rather than relying on the built-in show code.\nMaybe sticking the .show() in another thread would be sufficient?\nThe GIL is irrelevant - you've got a blocking show() call, so you need to handle that first.","Q_Score":2,"Tags":"python,matplotlib,parallel-processing,gil","A_Id":2744657,"CreationDate":"2010-04-30T12:41:00.000","Title":"Python: Plot some data (matplotlib) without GIL","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"my problem is the GIL of course. While I'm analysing data it would be nice to present some plots in between (so it's not too boring waiting for results)\nBut the GIL prevents this (and this is bringing me to the point of asking myself if Python was such a good idea in the first place).\nI can only display the plot, wait till the user closes it and commence calculations after that. A waste of time obviously.\nI already tried the subprocess and multiprocessing modules but can't seem to get them to work. \nAny thoughts on this one?\nThanks\nEdit: Ok so it's not the GIL but show().","AnswerCount":4,"Available Count":3,"Score":0.1488850336,"is_accepted":false,"ViewCount":737,"Q_Id":2744530,"Users Score":3,"Answer":"This has nothing to do with the GIL, just modify your analysis code to make it update the graph from time to time (for example every N iterations).\nOnly then if you see that drawing the graph slows the analysis code too much, put the graph update code in a subprocess with multiprocessing.","Q_Score":2,"Tags":"python,matplotlib,parallel-processing,gil","A_Id":2744604,"CreationDate":"2010-04-30T12:41:00.000","Title":"Python: Plot some data (matplotlib) without GIL","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"my problem is the GIL of course. 
While I'm analysing data it would be nice to present some plots in between (so it's not too boring waiting for results)\nBut the GIL prevents this (and this is bringing me to the point of asking myself if Python was such a good idea in the first place).\nI can only display the plot, wait till the user closes it and commence calculations after that. A waste of time obviously.\nI already tried the subprocess and multiprocessing modules but can't seem to get them to work. \nAny thoughts on this one?\nThanks\nEdit: Ok so it's not the GIL but show().","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":737,"Q_Id":2744530,"Users Score":2,"Answer":"It seems like the draw() method can circumvent the need for show(). \nThe only reason left for .show() in the script is to let it do the blocking part so that the images don't disapear when the script reaches its end.","Q_Score":2,"Tags":"python,matplotlib,parallel-processing,gil","A_Id":2744906,"CreationDate":"2010-04-30T12:41:00.000","Title":"Python: Plot some data (matplotlib) without GIL","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. \nMy understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?","AnswerCount":6,"Available Count":5,"Score":0.0333209931,"is_accepted":false,"ViewCount":2365,"Q_Id":2770030,"Users Score":1,"Answer":"what do you mean by \"file manipulation?\" are you talking about moving files around, deleting, copying, etc., in which case i would use a shell, e.g., bash, etc. if you're talking about reading in the data, performing calculations, perhaps writing out a new file, etc., then you could probably use Python or R. unless maintenance is an issue, i would just leave it as R and find other fish to fry as you're not going to see enough of a speedup to justify your time and effort in porting that code.","Q_Score":3,"Tags":"python,file,r,performance","A_Id":2770393,"CreationDate":"2010-05-05T01:24:00.000","Title":"R or Python for file manipulation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. \nMy understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?","AnswerCount":6,"Available Count":5,"Score":0.0333209931,"is_accepted":false,"ViewCount":2365,"Q_Id":2770030,"Users Score":1,"Answer":"Know where the time is being spent. If your R scripts are bottlenecked on disk IO (and that is very possible in this case), then you could rewrite them in hand-optimized assembly and be no faster. 
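A hedged sketch of the "update the figure every N iterations instead of a blocking show()" suggestions above, using matplotlib's interactive mode; the update interval and the toy computation are placeholders:

import matplotlib.pyplot as plt
import numpy as np

plt.ion()                                 # non-blocking, redraw on demand
fig, ax = plt.subplots()
line, = ax.plot([], [])
results = []

for i in range(200):
    results.append(np.sin(i / 10.0))      # stand-in for one analysis step
    if i % 20 == 0:
        line.set_data(range(len(results)), results)
        ax.relim()
        ax.autoscale_view()
        plt.pause(0.01)                   # lets the GUI process the redraw

plt.ioff()
plt.show()                                # final blocking show keeps the window open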
As always with optimization, if you don't measure first, you're just pissing into the wind. If they're not bottlenecked on disk IO, you would likely see more benefit from improving the algorithm than changing the language.","Q_Score":3,"Tags":"python,file,r,performance","A_Id":2770138,"CreationDate":"2010-05-05T01:24:00.000","Title":"R or Python for file manipulation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. \nMy understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?","AnswerCount":6,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":2365,"Q_Id":2770030,"Users Score":0,"Answer":"My guess is that you probably won't see much of a speed-up in time. When comparing high-level languages, overhead in the language is typically not to blame for performance problems. Typically, the problem is your algorithm.\nI'm not very familiar with R, but you may find speed-ups by reading larger chunks of data into memory at once vs smaller chunks (less system calls). If R doesn't have the ability to change something like this, you will probably find that python can be much faster simply because of this ability.","Q_Score":3,"Tags":"python,file,r,performance","A_Id":2770071,"CreationDate":"2010-05-05T01:24:00.000","Title":"R or Python for file manipulation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. \nMy understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?","AnswerCount":6,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":2365,"Q_Id":2770030,"Users Score":0,"Answer":"R data manipulation has rules for it to be fast. The basics are: \n\nvectorize\nuse data.frames as little as possible (for example, in the end)\n\nSearch for R time optimization and profiling and you will find many resources to help you.","Q_Score":3,"Tags":"python,file,r,performance","A_Id":2771903,"CreationDate":"2010-05-05T01:24:00.000","Title":"R or Python for file manipulation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. \nMy understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. 
Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?","AnswerCount":6,"Available Count":5,"Score":1.2,"is_accepted":true,"ViewCount":2365,"Q_Id":2770030,"Users Score":10,"Answer":"I write in both R and Python regularly. I find Python modules for writing, reading and parsing information easier to use, maintain and update. Little niceties like the way python lets you deal with lists of items over R's indexing make things much easier to read.\nI highly doubt you will gain any significant speed-up by switching the language. If you are becoming the new \"maintainer\" of these scripts and you find Python easier to understand and extend, then I'd say go for it.\nComputer time is cheap ... programmer time is expensive. If you have other things to do then I'd just limp along with what you've got until you have a free day to putz with them.\nHope that helps.","Q_Score":3,"Tags":"python,file,r,performance","A_Id":2770354,"CreationDate":"2010-05-05T01:24:00.000","Title":"R or Python for file manipulation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a raster file (basically 2D array) with close to a million points. I am trying to extract a circle from the raster (and all the points that lie within the circle). Using ArcGIS is exceedingly slow for this. Can anyone suggest any image processing library that is both easy to learn and powerful and quick enough for something like this?\nThanks!","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":3334,"Q_Id":2770356,"Users Score":1,"Answer":"You need a library that can read your raster. I am not sure how to do that in Python but you could look at geotools (especially with some of the new raster library integration) if you want to program in Java. If you are good with C I would reccomend using something like GDAL.\nIf you want to look at a desktop tool you could look at extending QGIS with python to do the operation above.\nIf I remember correctly, the Raster extension to PostGIS may support clipping rasters based upon vectors. This means you would need to create your circles to features in the DB and then import your raster but then you might be able to use SQL to extract your values.\nIf you are really just a text file with numbers in a grid then I would go with the suggestions above.","Q_Score":4,"Tags":"python,arcgis,raster","A_Id":2785460,"CreationDate":"2010-05-05T03:02:00.000","Title":"Extract points within a shape from a raster","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an application in which I need a structure to represent a huge graph (between 1000000 and 6000000 nodes and 100 or 600 edges per node) in memory. The edges representation will contain some attributes of the relation. 
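For the raster question above, if the grid really is just a 2-D array of values, a plain numpy boolean mask extracts everything inside a circle without any GIS machinery; a minimal sketch, with the centre and radius expressed in array (row, column) units:

import numpy as np

raster = np.random.rand(1000, 1000)      # stand-in for the real million-point raster
cy, cx, radius = 420.0, 610.0, 75.0      # circle centre (row, col) and radius in cells

rows, cols = np.ogrid[:raster.shape[0], :raster.shape[1]]
inside = (rows - cy) ** 2 + (cols - cx) ** 2 <= radius ** 2

values_in_circle = raster[inside]        # 1-D array of every value inside the circle
row_idx, col_idx = np.nonzero(inside)    # their positions, if those are needed too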
\nI have tried a memory map representation, arrays, dictionaries and strings to represent that structure in memory, but these always crash because of the memory limit.\nI would like to get an advice of how I can represent this, or something similar.\nBy the way, I'm using python.","AnswerCount":8,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":6997,"Q_Id":2806806,"Users Score":14,"Answer":"If that is 100-600 edges\/node, then you are talking about 3.6 billion edges.\nWhy does this have to be all in memory? \nCan you show us the structures you are currently using?\nHow much memory are we allowed (what is the memory limit you are hitting?)\n\nIf the only reason you need this in memory is because you need to be able to read and write it fast, then use a database. Databases read and write extremely fast, often they can read without going to disk at all.","Q_Score":9,"Tags":"python,memory,data-structures,graph","A_Id":2806868,"CreationDate":"2010-05-10T22:10:00.000","Title":"Huge Graph Structure","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an application in which I need a structure to represent a huge graph (between 1000000 and 6000000 nodes and 100 or 600 edges per node) in memory. The edges representation will contain some attributes of the relation. \nI have tried a memory map representation, arrays, dictionaries and strings to represent that structure in memory, but these always crash because of the memory limit.\nI would like to get an advice of how I can represent this, or something similar.\nBy the way, I'm using python.","AnswerCount":8,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":6997,"Q_Id":2806806,"Users Score":4,"Answer":"I doubt you'll be able to use a memory structure unless you have a LOT of memory at your disposal:\nAssume you are talking about 600 directed edges from each node, with a node being 4-bytes (integer key) and a directed edge being JUST the destination node keys (4 bytes each).\nThen the raw data about each node is 4 + 600 * 4 = 2404 bytes x 6,000,000 = over 14.4GB\nThat's without any other overheads or any additional data in the nodes (or edges).","Q_Score":9,"Tags":"python,memory,data-structures,graph","A_Id":2806891,"CreationDate":"2010-05-10T22:10:00.000","Title":"Huge Graph Structure","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an application in which I need a structure to represent a huge graph (between 1000000 and 6000000 nodes and 100 or 600 edges per node) in memory. The edges representation will contain some attributes of the relation. \nI have tried a memory map representation, arrays, dictionaries and strings to represent that structure in memory, but these always crash because of the memory limit.\nI would like to get an advice of how I can represent this, or something similar.\nBy the way, I'm using python.","AnswerCount":8,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":6997,"Q_Id":2806806,"Users Score":0,"Answer":"Sounds like you need a database and an iterator over the results. 
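A minimal sqlite3 sketch of that database-and-iterator suggestion for the huge-graph question; the table and column names here are made up for illustration:

import sqlite3

con = sqlite3.connect('graph.db')
con.execute('CREATE TABLE IF NOT EXISTS edges (src INTEGER, dst INTEGER, weight REAL)')
con.execute('CREATE INDEX IF NOT EXISTS idx_src ON edges (src)')

# insert edges in batches instead of holding billions of them in RAM
batch = [(1, 2, 0.5), (1, 3, 1.7), (2, 3, 0.1)]      # toy data
con.executemany('INSERT INTO edges VALUES (?, ?, ?)', batch)
con.commit()

# walk the neighbours of one node without ever loading the whole graph
for src, dst, weight in con.execute('SELECT src, dst, weight FROM edges WHERE src = ?', (1,)):
    pass    # process one edge at a time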
Then you wouldn't have to keep it all in memory at the same time but you could always have access to it.","Q_Score":9,"Tags":"python,memory,data-structures,graph","A_Id":2806909,"CreationDate":"2010-05-10T22:10:00.000","Title":"Huge Graph Structure","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a regression model in which the dependent variable is continuous but ninety percent of the independent variables are categorical(both ordered and unordered) and around thirty percent of the records have missing values(to make matters worse they are missing randomly without any pattern, that is, more that forty five percent of the data hava at least one missing value). There is no a priori theory to choose the specification of the model so one of the key tasks is dimension reduction before running the regression. While I am aware of several methods for dimension reduction for continuous variables I am not aware of a similar statical literature for categorical data (except, perhaps, as a part of correspondence analysis which is basically a variation of principal component analysis on frequency table). Let me also add that the dataset is of moderate size 500000 observations with 200 variables. I have two questions.\n\nIs there a good statistical reference out there for dimension reduction for categorical data along with robust imputation (I think the first issue is imputation and then dimension reduction)?\nThis is linked to implementation of above problem. I have used R extensively earlier and tend to use transcan and impute function heavily for continuous variables and use a variation of tree method to impute categorical values. I have a working knowledge of Python so if something is nice out there for this purpose then I will use it. Any implementation pointers in python or R will be of great help.\nThank you.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":16303,"Q_Id":2837850,"Users Score":0,"Answer":"45% of the data have at least one missing value, you say. This is impressive. I would first look if there is no pattern. You say they are missing at random. Have you tested for MAR ? Have you tested for MAR for sub-groups ? \nNot knowing your data I would first look if there are not cases with many missing values and see if there are theoretical or practical reasons to exclude them. Practical reasons are the production of the data. They might be that they were not well observed, the machine producing the data did not turn all the time, the survey did not cover all countries all the time, etc. For instance, you have survey data on current occupation, but part of the respondents are retired. So they have to be (system-)missing. 
You can not replace these data with some computed value.\nMaybe you can cut slices out of the cases with full and look for the conditions of data production.","Q_Score":24,"Tags":"python,r,statistics","A_Id":53035704,"CreationDate":"2010-05-14T21:50:00.000","Title":"Dimension Reduction in Categorical Data with missing values","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I bumped into a case where I need a big (=huge) python dictionary, which turned to be quite memory-consuming.\nHowever, since all of the values are of a single type (long) - as well as the keys, I figured I can use python (or numpy, doesn't really matter) array for the values ; and wrap the needed interface (in: x ; out: d[x]) with an object which actually uses these arrays for the keys and values storage.\nI can use a index-conversion object (input --> index, of 1..n, where n is the different-values counter), and return array[index]. I can elaborate on some techniques of how to implement such an indexing-methods with reasonable memory requirement, it works and even pretty good. \nHowever, I wonder if there is such a data-structure-object already exists (in python, or wrapped to python from C\/++), in any package (I checked collections, and some Google searches).\nAny comment will be welcome, thanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1303,"Q_Id":2942375,"Users Score":0,"Answer":"You might try using std::map. Boost.Python provides a Python wrapping for std::map out-of-the-box.","Q_Score":3,"Tags":"python,arrays,data-structures,dictionary","A_Id":2943067,"CreationDate":"2010-05-31T08:47:00.000","Title":"python dictionary with constant value-type","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have the following 4 arrays ( grouped in 2 groups ) that I would like to merge in ascending order by the keys array. \nI can use also dictionaries as structure if it is easier.\nHas python any command or something to make this quickly possible?\nRegards\nMN\n\n# group 1\n[7, 2, 3, 5] #keys\n[10,11,12,26] #values \n\n[0, 4] #keys\n[20, 33] #values \n\n# I would like to have\n[ 0, 2, 3, 4, 5, 7 ] # ordered keys\n[20, 11,12,33,26,33] # associated values","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":125,"Q_Id":2960855,"Users Score":0,"Answer":"Add all keys and associated values from both sets of data to a single dictionary. 
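Going back to the earlier question about a dictionary whose keys and values are all of one integer type, a numpy-backed alternative keeps memory per entry small by storing sorted key and value arrays and doing a binary search on lookup; this is only a sketch of the indexing idea, not a drop-in mapping type:

import numpy as np

keys = np.array([3, 11, 42, 97, 123456], dtype=np.int64)     # must be kept sorted
values = np.array([10, 20, 30, 40, 50], dtype=np.int64)

def lookup(k):
    # binary search; plays the role of d[k] at roughly 16 bytes per entry
    i = np.searchsorted(keys, k)
    if i < len(keys) and keys[i] == k:
        return values[i]
    raise KeyError(k)

print(lookup(42))    # -> 30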
\nGet the items of the dictionary ans sort them.\nprint out the answer.\nk1=[7, 2, 3, 5]\nv1=[10,11,12,26]\nk2=[0, 4]\nv2=[20, 33] \nd=dict(zip(k1,v1))\nd.update(zip(k2,v2))\nanswer=d.items()\nanswer.sort()\nkeys=[k for (k,v) in answer]\nvalues=[v for (k,v) in answer]\nprint keys\nprint values\n\nEdit: This is for Python 2.6 or below which do not have any ordered dictionary.","Q_Score":1,"Tags":"python","A_Id":2960944,"CreationDate":"2010-06-02T19:16:00.000","Title":"merging in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I two programs running, one in Python and one in C++, and I need to share a two-dimensional array (just of decimal numbers) between them. I am currently looking into serialization, but pickle is python-specific, unfortunately. What is the best way to do this?\nThanks\nEdit: It is likely that the array will only have 50 elements or so, but the transfer of data will need to occur very frequently: 60x per second or more.","AnswerCount":7,"Available Count":2,"Score":0.0285636566,"is_accepted":false,"ViewCount":2433,"Q_Id":2968172,"Users Score":1,"Answer":"I would propose simply to use c arrays(via ctypes on the python side) and simply pull\/push the raw data through an socket","Q_Score":3,"Tags":"c++,python,serialization","A_Id":2968306,"CreationDate":"2010-06-03T17:06:00.000","Title":"How to share an array in Python with a C++ Program?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I two programs running, one in Python and one in C++, and I need to share a two-dimensional array (just of decimal numbers) between them. I am currently looking into serialization, but pickle is python-specific, unfortunately. What is the best way to do this?\nThanks\nEdit: It is likely that the array will only have 50 elements or so, but the transfer of data will need to occur very frequently: 60x per second or more.","AnswerCount":7,"Available Count":2,"Score":0.0285636566,"is_accepted":false,"ViewCount":2433,"Q_Id":2968172,"Users Score":1,"Answer":"Serialization is one problem while IPC is another. Do you have the IPC portion figured out? (pipes, sockets, mmap, etc?)\nOn to serialization - if you're concerned about performance more than robustness (being able to plug more modules into this architecture) and security, then you should take a look at the struct module. This will let you pack data into C structures using format strings to define the structure (takes care of padding, alignment, and byte ordering for you!) In the C++ program, cast a pointer to the buffer to the corresponding structure type. \nThis works well with a tightly-coupled Python script and C++ program that is only run internally.","Q_Score":3,"Tags":"c++,python,serialization","A_Id":2968375,"CreationDate":"2010-06-03T17:06:00.000","Title":"How to share an array in Python with a C++ Program?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to generate random numbers from a gaussian distribution. 
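For the Python/C++ array-sharing question above, the struct-module answer can be sketched as below; the exact layout (a little-endian element count followed by doubles) is just an assumption for illustration, and the C++ side would read the same fixed layout from the socket or pipe:

import struct

values = [0.1 * i for i in range(50)]     # the small 2-D array flattened to 50 doubles

# pack: little-endian unsigned int with the element count, then the doubles themselves
payload = struct.pack('<I%dd' % len(values), len(values), *values)

# ...send payload to the C++ process over a socket, pipe, shared memory, etc...

# unpack on the receiving side (a C++ struct with the matching layout works the same way)
count = struct.unpack_from('<I', payload)[0]
received = struct.unpack_from('<%dd' % count, payload, offset=4)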
Python has the very useful random.gauss() method, but this is only a one-dimensional random variable. How could I programmatically generate random numbers from this distribution in n-dimensions?\nFor example, in two dimensions, the return value of this method is essentially distance from the mean, so I would still need (x,y) coordinates to determine an actual data point. I suppose I could generate two more random numbers, but I'm not sure how to set up the constraints.\nI appreciate any insights. Thanks!","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":5620,"Q_Id":2969593,"Users Score":5,"Answer":"Numpy has multidimensional equivalents to the functions in the random module \nThe function you're looking for is numpy.random.normal","Q_Score":2,"Tags":"python,random,n-dimensional","A_Id":2969618,"CreationDate":"2010-06-03T20:40:00.000","Title":"Generate n-dimensional random numbers in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to generate random numbers from a gaussian distribution. Python has the very useful random.gauss() method, but this is only a one-dimensional random variable. How could I programmatically generate random numbers from this distribution in n-dimensions?\nFor example, in two dimensions, the return value of this method is essentially distance from the mean, so I would still need (x,y) coordinates to determine an actual data point. I suppose I could generate two more random numbers, but I'm not sure how to set up the constraints.\nI appreciate any insights. Thanks!","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":5620,"Q_Id":2969593,"Users Score":1,"Answer":"You need to properly decompose your multi-dimensional distribution into a composition of one-dimensional distributions. For example, if you want a point at a Gaussian-distributed distance from a given center and a uniformly-distributed angle around it, you'll get the polar coordinates for the delta with a Gaussian rho and a uniform theta (between 0 and 2 pi), then, if you want cartesian coordinates, you of course do a coordinate transformation.","Q_Score":2,"Tags":"python,random,n-dimensional","A_Id":2969634,"CreationDate":"2010-06-03T20:40:00.000","Title":"Generate n-dimensional random numbers in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've increased the font of my ticklabels successfully, but now they're too close to the axis. 
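A short illustration of the numpy.random.normal suggestion above, plus numpy.random.multivariate_normal for the case where the dimensions should be correlated:

import numpy as np

# 1000 independent samples in 3 dimensions, mean 0, standard deviation 2
points = np.random.normal(loc=0.0, scale=2.0, size=(1000, 3))

# correlated 2-D gaussian: supply a mean vector and a covariance matrix
mean = [1.0, -1.0]
cov = [[2.0, 0.3],
       [0.3, 0.5]]
xy = np.random.multivariate_normal(mean, cov, size=1000)    # shape (1000, 2)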
I'd like to add a little breathing room between the ticklabels and the axis.","AnswerCount":5,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":89545,"Q_Id":2969867,"Users Score":117,"Answer":"If you don't want to change the spacing globally (by editing your rcParams), and want a cleaner approach, try this:\nax.tick_params(axis='both', which='major', pad=15)\nor for just x axis\nax.tick_params(axis='x', which='major', pad=15)\nor the y axis\nax.tick_params(axis='y', which='major', pad=15)","Q_Score":86,"Tags":"python,matplotlib","A_Id":29524883,"CreationDate":"2010-06-03T21:19:00.000","Title":"How do I add space between the ticklabels and the axes in matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing an app to do a file conversion and part of that is replacing old account numbers with a new account numbers.\nRight now I have a CSV file mapping the old and new account numbers with around 30K records. I read this in and store it as dict and when writing the new file grab the new account from the dict by key.\nMy question is what is the best way to do this if the CSV file increases to 100K+ records?\nWould it be more efficient to convert the account mappings from a CSV to a sqlite database rather than storing them as a dict in memory?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":109,"Q_Id":2980257,"Users Score":1,"Answer":"As long as they will all fit in memory, a dict will be the most efficient solution. It's also a lot easier to code. 100k records should be no problem on a modern computer.\nYou are right that switching to an SQLite database is a good choice when the number of records gets very large.","Q_Score":3,"Tags":"python,database,sqlite,dictionary,csv","A_Id":2980269,"CreationDate":"2010-06-05T12:08:00.000","Title":"Efficient way to access a mapping of identifiers in Python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing a project with reasonalby big DataBase. It's not a probper DB file, but a class with format as follows:\nDataBase.Nodes.Data=[[] for i in range(1,1000)] f.e. this DataBase is all together something like few thousands rows. Fisrt question - is the way I'm doing efficient, or is it better to use SQL, or any other \"proper\" DB, which I've never used actually. \nAnd the main question - I'd like to save my DataBase class with all record, and then re-open it with Python in another session. Is that possible, what tool should I use? cPickle - it seems to be only for strings, any other?\nIn matlab there's very useful functionality named save workspace - it saves all Your variables to a file that You can open at another session - this would be vary useful in python!","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":306,"Q_Id":2990995,"Users Score":3,"Answer":"Pickle (cPickle) can handle any (picklable) Python object. 
So as long, as you're not trying to pickle thread or filehandle or something like that, you're ok.","Q_Score":3,"Tags":"python,serialization,pickle,object-persistence","A_Id":2991030,"CreationDate":"2010-06-07T15:52:00.000","Title":"How to save big \"database-like\" class in python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What are some fast and somewhat reliable ways to extract information about images? I've been tinkering with OpenCV and this seems so far to be the best route plus it has Python bindings.\nSo to be more specific I'd like to determine what I can about what's in an image. So for example the haar face detection and full body detection classifiers are great - now I can tell that most likely there are faces and \/ or people in the image as well as about how many. \nokay - what else - how about whether there are any buildings and if so what do they seem to be - huts, office buildings etc? Is there sky visible, grass, trees and so forth. \nFrom what I've read about training classifiers to detect objects, it seems like a rather laborious process 10,000 or so wrong images and 5,000 or so correct samples to train a classifier. \nI'm hoping that there are some decent ones around already instead of having to do this all myself for a bunch of different objects - or is there some other way to go about this sort of thing?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1546,"Q_Id":2994398,"Users Score":2,"Answer":"Your question is difficult to answer without more clarification about the types of images you are analyzing and your purpose.\nThe tone of the post seems that you are interested in tinkering -- that's fine. If you want to tinker, one example application might be iris identification using wavelet analysis. You can also try motion tracking; I've done that in OpenCV using the sample projects, and it is kind of interesting. You can try image segmentation for the purpose of scene analysis; take an outdoor photo and segment the image according to texture and\/or color.\nThere is no hard number for how large your training set must be. It is highly application dependent. A few hundred images may suffice.","Q_Score":1,"Tags":"python,image,opencv,identification","A_Id":2994438,"CreationDate":"2010-06-08T02:17:00.000","Title":"Extracting Information from Images","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just took my first baby step today into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny.\nI write Python, so I've spent the last few hours reading about HDF5, and Numpy, and PyTable, but I still feel like I'm not really grokking what a terabyte-sized data set actually means for me as a programmer. \nFor example, someone pointed out that with larger data sets, it becomes impossible to read the whole thing into memory, not because the machine has insufficient RAM, but because the architecture has insufficient address space! It blew my mind.\nWhat other assumptions have I been relying in the classroom that just don't work with input this big? 
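Returning to the earlier question about saving a "database-like" class between sessions, the pickle answer above might look like this in practice (Python 3 shown; on Python 2 the module is cPickle); note that the class definition has to be importable again when the file is loaded:

import pickle

class DataBase(object):
    def __init__(self):
        self.nodes = [[] for _ in range(1000)]
        self.meta = {'created_by': 'sim'}

db = DataBase()
db.nodes[0].append(3.14)

with open('database.pkl', 'wb') as f:
    pickle.dump(db, f, protocol=pickle.HIGHEST_PROTOCOL)

# ...in another session...
with open('database.pkl', 'rb') as f:
    restored = pickle.load(f)
print(restored.nodes[0])    # [3.14]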
What kinds of things do I need to start doing or thinking about differently? (This doesn't have to be Python specific.)","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1855,"Q_Id":3012157,"Users Score":0,"Answer":"The main assumptions are about the amount of cpu\/cache\/ram\/storage\/bandwidth you can have in a single machine at an acceptable price. There are lots of answers here at stackoverflow still based on the old assumptions of a 32 bit machine with 4G ram and about a terabyte of storage and 1Gb network. With 16GB DDR-3 ram modules at 220 Eur, 512 GB ram, 48 core machines can be build at reasonable prices. The switch from hard disks to SSD is another important change.","Q_Score":21,"Tags":"python,large-data-volumes,scientific-computing","A_Id":9964718,"CreationDate":"2010-06-10T06:34:00.000","Title":"what changes when your input is giga\/terabyte sized?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just took my first baby step today into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny.\nI write Python, so I've spent the last few hours reading about HDF5, and Numpy, and PyTable, but I still feel like I'm not really grokking what a terabyte-sized data set actually means for me as a programmer. \nFor example, someone pointed out that with larger data sets, it becomes impossible to read the whole thing into memory, not because the machine has insufficient RAM, but because the architecture has insufficient address space! It blew my mind.\nWhat other assumptions have I been relying in the classroom that just don't work with input this big? What kinds of things do I need to start doing or thinking about differently? (This doesn't have to be Python specific.)","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":1855,"Q_Id":3012157,"Users Score":1,"Answer":"While some languages have naturally lower memory overhead in their types than others, that really doesn't matter for data this size - you're not holding your entire data set in memory regardless of the language you're using, so the \"expense\" of Python is irrelevant here. As you pointed out, there simply isn't enough address space to even reference all this data, let alone hold onto it.\nWhat this normally means is either a) storing your data in a database, or b) adding resources in the form of additional computers, thus adding to your available address space and memory. Realistically you're going to end up doing both of these things. One key thing to keep in mind when using a database is that a database isn't just a place to put your data while you're not using it - you can do WORK in the database, and you should try to do so. The database technology you use has a large impact on the kind of work you can do, but an SQL database, for example, is well suited to do a lot of set math and do it efficiently (of course, this means that schema design becomes a very important part of your overall architecture). 
Don't just suck data out and manipulate it only in memory - try to leverage the computational query capabilities of your database to do as much work as possible before you ever put the data in memory in your process.","Q_Score":21,"Tags":"python,large-data-volumes,scientific-computing","A_Id":3012350,"CreationDate":"2010-06-10T06:34:00.000","Title":"what changes when your input is giga\/terabyte sized?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just took my first baby step today into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny.\nI write Python, so I've spent the last few hours reading about HDF5, and Numpy, and PyTable, but I still feel like I'm not really grokking what a terabyte-sized data set actually means for me as a programmer. \nFor example, someone pointed out that with larger data sets, it becomes impossible to read the whole thing into memory, not because the machine has insufficient RAM, but because the architecture has insufficient address space! It blew my mind.\nWhat other assumptions have I been relying in the classroom that just don't work with input this big? What kinds of things do I need to start doing or thinking about differently? (This doesn't have to be Python specific.)","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1855,"Q_Id":3012157,"Users Score":18,"Answer":"I'm currently engaged in high-performance computing in a small corner of the oil industry and regularly work with datasets of the orders of magnitude you are concerned about. Here are some points to consider:\n\nDatabases don't have a lot of traction in this domain. Almost all our data is kept in files, some of those files are based on tape file formats designed in the 70s. I think that part of the reason for the non-use of databases is historic; 10, even 5, years ago I think that Oracle and its kin just weren't up to the task of managing single datasets of O(TB) let alone a database of 1000s of such datasets.\nAnother reason is a conceptual mismatch between the normalisation rules for effective database analysis and design and the nature of scientific data sets.\nI think (though I'm not sure) that the performance reason(s) are much less persuasive today. And the concept-mismatch reason is probably also less pressing now that most of the major databases available can cope with spatial data sets which are generally a much closer conceptual fit to other scientific datasets. I have seen an increasing use of databases for storing meta-data, with some sort of reference, then, to the file(s) containing the sensor data.\nHowever, I'd still be looking at, in fact am looking at, HDF5. It has a couple of attractions for me (a) it's just another file format so I don't have to install a DBMS and wrestle with its complexities, and (b) with the right hardware I can read\/write an HDF5 file in parallel. (Yes, I know that I can read and write databases in parallel too).\nWhich takes me to the second point: when dealing with very large datasets you really need to be thinking of using parallel computation. 
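The HDF5 point in the answer above can be made concrete with the h5py package (an assumption on my part, since the answer names the format rather than a Python binding, and the file and dataset names below are invented); the important habit is reading one slab at a time instead of the whole dataset:

import numpy as np
import h5py

with h5py.File('haplotypes.h5', 'r') as f:
    dset = f['genotypes']                        # e.g. shape (1600, 48000)
    chunk = 256
    col_sums = np.zeros(dset.shape[1])
    for start in range(0, dset.shape[0], chunk):
        block = dset[start:start + chunk, :]     # only this slab is pulled into memory
        col_sums += block.sum(axis=0)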
I work mostly in Fortran, one of its strengths is its array syntax which fits very well onto a lot of scientific computing; another is the good support for parallelisation available. I believe that Python has all sorts of parallelisation support too so it's probably not a bad choice for you.\nSure you can add parallelism on to sequential systems, but it's much better to start out designing for parallelism. To take just one example: the best sequential algorithm for a problem is very often not the best candidate for parallelisation. You might be better off using a different algorithm, one which scales better on multiple processors. Which leads neatly to the next point.\nI think also that you may have to come to terms with surrendering any attachments you have (if you have them) to lots of clever algorithms and data structures which work well when all your data is resident in memory. Very often trying to adapt them to the situation where you can't get the data into memory all at once, is much harder (and less performant) than brute-force and regarding the entire file as one large array.\nPerformance starts to matter in a serious way, both the execution performance of programs, and developer performance. It's not that a 1TB dataset requires 10 times as much code as a 1GB dataset so you have to work faster, it's that some of the ideas that you will need to implement will be crazily complex, and probably have to be written by domain specialists, ie the scientists you are working with. Here the domain specialists write in Matlab.\n\nBut this is going on too long, I'd better get back to work","Q_Score":21,"Tags":"python,large-data-volumes,scientific-computing","A_Id":3012599,"CreationDate":"2010-06-10T06:34:00.000","Title":"what changes when your input is giga\/terabyte sized?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I cannot understand it. Very simple, and obvious functionality:\nYou have a code in any programming language, You run it. In this code You generate variables, than You save them (the values, names, namely everything) to a file, with one command. When it's saved You may open such a file in Your code also with simple command. \nIt works perfect in matlab (save Workspace , load Workspace ) - in python there's some weird \"pickle\" protocol, which produces errors all the time, while all I want to do is save variable, and load it again in another session (?????) \nf.e. You cannot save class with variables (in Matlab there's no problem)\nYou cannot load arrays in cPickle (but YOu can save them (?????) )\nWhy don't make it easier?\nIs there a way to save the current variables with values, and then load them?","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":8265,"Q_Id":3016116,"Users Score":0,"Answer":"I take issue with the statement that the saving of variables in Matlab is an environment function. the \"save\" statement in matlab is a function and part of the matlab language not just a command. It is a very useful function as you don't have to worry about the trivial minutia of file i\/o and it handles all sorts of variables from scalar, matrix, objects, structures.","Q_Score":3,"Tags":"python,serialization","A_Id":27096538,"CreationDate":"2010-06-10T15:58:00.000","Title":"Save Workspace - save all variables to a file. 
Python doesn't have it)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I cannot understand it. Very simple, and obvious functionality:\nYou have a code in any programming language, You run it. In this code You generate variables, than You save them (the values, names, namely everything) to a file, with one command. When it's saved You may open such a file in Your code also with simple command. \nIt works perfect in matlab (save Workspace , load Workspace ) - in python there's some weird \"pickle\" protocol, which produces errors all the time, while all I want to do is save variable, and load it again in another session (?????) \nf.e. You cannot save class with variables (in Matlab there's no problem)\nYou cannot load arrays in cPickle (but YOu can save them (?????) )\nWhy don't make it easier?\nIs there a way to save the current variables with values, and then load them?","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":8265,"Q_Id":3016116,"Users Score":2,"Answer":"What you are describing is Matlab environment feature not a programming language. \nWhat you need is a way to store serialized state of some object which could be easily done in almost any programming language. In python world pickle is the easiest way to achieve it and if you could provide more details about the errors it produces for you people would probably be able to give you more details on that.\nIn general for object oriented languages (including python) it is always a good approach to incapsulate a your state into single object that could be serialized and de-serialized and then store\/load an instance of such class. Pickling and unpickling of such objects works perfectly for many developers so this must be something specific to your implementation.","Q_Score":3,"Tags":"python,serialization","A_Id":3016188,"CreationDate":"2010-06-10T15:58:00.000","Title":"Save Workspace - save all variables to a file. Python doesn't have it)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I installed matplotlib using the Mac disk image installer for MacOS 10.5 and Python 2.5. I installed numpy then tried to import matplotlib but got this error: ImportError: numpy 1.1 or later is required; you have 2.0.0.dev8462. It seems to that version 2.0.0.dev8462 would be later than version 1.1 but I am guessing that matplotlib got confused with the \".dev8462\" in the version. Is there any workaround to this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1283,"Q_Id":3035028,"Users Score":0,"Answer":"Following Justin's comment ... 
here is the equivalent file for Linux:\n\/usr\/lib\/pymodules\/python2.6\/matplotlib\/__init__.py\nsudo edit that to fix the troublesome line to:\nif not ((int(nn[0]) >= 1 and int(nn[1]) >= 1) or int(nn[0]) >= 2):\nThanks Justin Peel!","Q_Score":1,"Tags":"python,installation,numpy,matplotlib","A_Id":5926995,"CreationDate":"2010-06-14T04:59:00.000","Title":"Can't import matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently parsing CSV tables and need to discover the \"data types\" of the columns. I don't know the exact format of the values. Obviously, everything that the CSV parser outputs is a string. The data types I am currently interested in are:\n\ninteger\nfloating point\ndate\nboolean\nstring \n\nMy current thoughts are to test a sample of rows (maybe several hundred?) in order to determine the types of data present through pattern matching. \nI am particularly concerned about the date data type - is their a python module for parsing common date idioms (obviously I will not be able to detect them all)? \nWhat about integers and floats?","AnswerCount":5,"Available Count":1,"Score":0.1586485043,"is_accepted":false,"ViewCount":4235,"Q_Id":3098337,"Users Score":4,"Answer":"ast.literal_eval() can get the easy ones.","Q_Score":2,"Tags":"python,parsing,csv,input,types","A_Id":3098439,"CreationDate":"2010-06-23T01:31:00.000","Title":"Method for guessing type of data represented currently represented as strings","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've read a number of questions on finding the colour palette of an image, but my problem is slightly different. I'm looking for images made up of pure colours: pictures of the open sky, colourful photo backgrounds, red brick walls etc.\nSo far I've used the App Engine Image.histogram() function to produce a histogram, filter out values below a certain occurrence threshold, and average the remaining ones down. That still seems to leave in a lot of extraneous photographs where there are blobs of pure colour in a mixed bag of other photos.\nAny ideas much appreciated!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":381,"Q_Id":3106788,"Users Score":1,"Answer":"How about doing this?\n\nBlur the image using some fast blurring algorithm. (Search for stack blur or box blur)\nCompute standard deviation of the pixels in RGB domain, once for each color.\nDiscard the image if the standard deviation is beyond a certain threshold.","Q_Score":1,"Tags":"python,image,image-processing,colors","A_Id":3106854,"CreationDate":"2010-06-24T02:02:00.000","Title":"Finding images with pure colours","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a MacBook Pro with Snow Leopard, and the Python 2.6 distribution that comes standard. Numpy does not work properly on it. Loadtxt gives errors of the filename being too long, and getfromtxt does not work at all (no object in module error). So then I tried downloading the py26-numpy port on MacPorts. 
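To flesh out the ast.literal_eval() suggestion for guessing column types from CSV strings, a minimal sketch; the date formats tried are just examples and would be extended for real data:

import ast
from datetime import datetime

DATE_FORMATS = ('%Y-%m-%d', '%d/%m/%Y', '%m/%d/%Y %H:%M')

def guess_type(text):
    if text.lower() in ('true', 'false'):
        return 'boolean'
    try:
        value = ast.literal_eval(text)     # safely handles ints, floats and quoted strings
        if isinstance(value, bool):
            return 'boolean'
        if isinstance(value, int):
            return 'integer'
        if isinstance(value, float):
            return 'floating point'
    except (ValueError, SyntaxError):
        pass
    for fmt in DATE_FORMATS:
        try:
            datetime.strptime(text, fmt)
            return 'date'
        except ValueError:
            pass
    return 'string'

print(guess_type('3.14'), guess_type('2010-06-23'), guess_type('hello'))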
Of course when I use python, it defaults the mac distribution. How can I switch it to use the latest and greatest from MacPorts. This seems so much simpler than building all the tools I need from source...\nThanks!","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":645,"Q_Id":3134332,"Users Score":1,"Answer":"You need to update your PATH so that the stuff from MacPorts is in front of the standard system directories, e.g., export PATH=\/opt\/local\/bin:\/opt\/local\/sbin:\/opt\/local\/Library\/Frameworks\/Python.framework\/Versions\/Current\/bin\/:$PATH.\nUPDATE: Pay special attention to the fact that \/opt\/local\/Library\/Frameworks\/Python.framework\/Versions\/Current\/bin is in front of your old PATH value.","Q_Score":1,"Tags":"python,osx-snow-leopard,numpy,macports","A_Id":3134369,"CreationDate":"2010-06-28T16:43:00.000","Title":"Switch python distributions","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am looking for references (tutorials, books, academic literature) concerning structuring unstructured text in a manner similar to the google calendar quick add button.\nI understand this may come under the NLP category, but I am interested only in the process of going from something like \"Levi jeans size 32 A0b293\"\nto: Brand: Levi, Size: 32, Category: Jeans, code: A0b293\nI imagine it would be some combination of lexical parsing and machine learning techniques.\nI am rather language agnostic but if pushed would prefer python, Matlab or C++ references\nThanks","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":7405,"Q_Id":3162450,"Users Score":1,"Answer":"Possibly look at \"Collective Intelligence\" by Toby Segaran. I seem to remember that addressing the basics of this in one chapter.","Q_Score":8,"Tags":"python,nlp,structured-data","A_Id":3166594,"CreationDate":"2010-07-01T23:48:00.000","Title":"Unstructured Text to Structured Data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for references (tutorials, books, academic literature) concerning structuring unstructured text in a manner similar to the google calendar quick add button.\nI understand this may come under the NLP category, but I am interested only in the process of going from something like \"Levi jeans size 32 A0b293\"\nto: Brand: Levi, Size: 32, Category: Jeans, code: A0b293\nI imagine it would be some combination of lexical parsing and machine learning techniques.\nI am rather language agnostic but if pushed would prefer python, Matlab or C++ references\nThanks","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":7405,"Q_Id":3162450,"Users Score":0,"Answer":"If you are only working for cases like the example you cited, you are better off using some manual rule-based that is 100% predictable and covers 90% of the cases it might encounter production.. \nYou could enumerable lists of all possible brands and categories and detect which is which in an input string cos there's usually very little intersection in these two lists.. \nThe other two could easily be detected and extracted using regular expressions. 
(1-3 digit numbers are always sizes, etc) \nYour problem domain doesn't seem big enough to warrant a more heavy duty approach such as statistical learning.","Q_Score":8,"Tags":"python,nlp,structured-data","A_Id":3177235,"CreationDate":"2010-07-01T23:48:00.000","Title":"Unstructured Text to Structured Data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The simulation tool I have developed over the past couple of years, is written in C++ and currently has a tcl interpreted front-end. It was written such that it can be run either in an interactive shell, or by passing an input file. Either way, the input file is written in tcl (with many additional simulation-specific commands I have added). This allows for quite powerful input files (e.g.- when running monte-carlo sims, random distributions can be programmed as tcl procedures directly in the input file).\nUnfortunately, I am finding that the tcl interpreter is becoming somewhat limited compared to what more modern interpreted languages have to offer, and its syntax seems a bit arcane. Since the computational engine was written as a library with a c-compatible API, it should be straightforward to write alternative front-ends, and I am thinking of moving to a new interpreter, however I am having a bit of a time choosing (mostly because I don't have significant experience with many interpreted languages). The options I have begun to explore are as follows:\nRemaining with tcl:\nPros:\n - No need to change the existing code.\n - Existing input files stay the same. (though I'd probably keep the tcl front end as an option)\n - Mature language with lots of community support.\nCons:\n - Feeling limited by the language syntax.\n - Getting complaints from users as to the difficulty of learning tcl.\nPython:\nPros:\n - Modern interpreter, known to be quite efficient.\n - Large, active community.\n - Well known scientific and mathematical modules, such as scipy.\n - Commonly used in the academic Scientific\/engineering community (typical users of my code)\nCons:\n - I've never used it and thus would take time to learn the language (this is also a pro, as I've been meaning to learn python for quite some time)\n - Strict formatting of the input files (indentation, etc..)\nMatlab:\nPros:\n - Very power and widely used mathematical tool\n - Powerful built-in visualization\/plotting.\n - Extensible, through community submitted code, as well as commercial toolboxes.\n - Many in science\/engineering academia is familiar with and comfortable with matlab.\nCons:\n - Can not distribute as an executable- would need to be an add-on\/toolbox.\n - Would require (?) the matlab compiler (which is pricy).\n - Requires Matlab, which is also pricy.\nThese pros and cons are what I've been able to come up with, though I have very little experience with interpreted languages in general. I'd love to hear any thoughts on both the interpreters I've proposed here, if these pros\/cons listed are legitimate, and any other interpreters I haven't thought of (e.g.- would php be appropriate for something like this? lua?). 
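The rule-based approach above (enumerate the brands and categories, regex the rest) for inputs like "Levi jeans size 32 A0b293" might be sketched as follows; the brand and category lists are obviously placeholders:

import re

BRANDS = {'levi', 'wrangler', 'diesel'}          # placeholder enumerations
CATEGORIES = {'jeans', 'shirt', 'jacket'}

def parse_listing(text):
    result = {}
    for token in text.split():
        low = token.lower()
        if low in BRANDS:
            result['Brand'] = token
        elif low in CATEGORIES:
            result['Category'] = token
        elif re.fullmatch(r'\d{1,3}', token):                  # 1-3 digit numbers are sizes
            result['Size'] = token
        elif re.fullmatch(r'[A-Za-z]\d[A-Za-z0-9]+', token):   # crude product-code pattern
            result['Code'] = token
    return result

print(parse_listing('Levi jeans size 32 A0b293'))
# {'Brand': 'Levi', 'Category': 'jeans', 'Size': '32', 'Code': 'A0b293'}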
First hand experience with embedding an interpreter in your code is definitely a plus!","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":453,"Q_Id":3167661,"Users Score":0,"Answer":"Well, unless there are any other suggestions, the final answer I have arrived at is to go with Python.\nI seriously considered matlab\/octave, but when reading the octave API and matlab API, they are different enough that I'd need to build separate interfaces for each (or get very creative with macros). With python I end up with a single, easier to maintain codebase for the front end, and it is used by just about everyone we know. Thanks for the tips\/feedback everyone!","Q_Score":6,"Tags":"c++,python,matlab,tcl,interpreter","A_Id":3188680,"CreationDate":"2010-07-02T16:58:00.000","Title":"Picking a front-end\/interpreter for a scientific code","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"The simulation tool I have developed over the past couple of years, is written in C++ and currently has a tcl interpreted front-end. It was written such that it can be run either in an interactive shell, or by passing an input file. Either way, the input file is written in tcl (with many additional simulation-specific commands I have added). This allows for quite powerful input files (e.g.- when running monte-carlo sims, random distributions can be programmed as tcl procedures directly in the input file).\nUnfortunately, I am finding that the tcl interpreter is becoming somewhat limited compared to what more modern interpreted languages have to offer, and its syntax seems a bit arcane. Since the computational engine was written as a library with a c-compatible API, it should be straightforward to write alternative front-ends, and I am thinking of moving to a new interpreter, however I am having a bit of a time choosing (mostly because I don't have significant experience with many interpreted languages). The options I have begun to explore are as follows:\nRemaining with tcl:\nPros:\n - No need to change the existing code.\n - Existing input files stay the same. (though I'd probably keep the tcl front end as an option)\n - Mature language with lots of community support.\nCons:\n - Feeling limited by the language syntax.\n - Getting complaints from users as to the difficulty of learning tcl.\nPython:\nPros:\n - Modern interpreter, known to be quite efficient.\n - Large, active community.\n - Well known scientific and mathematical modules, such as scipy.\n - Commonly used in the academic Scientific\/engineering community (typical users of my code)\nCons:\n - I've never used it and thus would take time to learn the language (this is also a pro, as I've been meaning to learn python for quite some time)\n - Strict formatting of the input files (indentation, etc..)\nMatlab:\nPros:\n - Very power and widely used mathematical tool\n - Powerful built-in visualization\/plotting.\n - Extensible, through community submitted code, as well as commercial toolboxes.\n - Many in science\/engineering academia is familiar with and comfortable with matlab.\nCons:\n - Can not distribute as an executable- would need to be an add-on\/toolbox.\n - Would require (?) 
the matlab compiler (which is pricy).\n - Requires Matlab, which is also pricy.\nThese pros and cons are what I've been able to come up with, though I have very little experience with interpreted languages in general. I'd love to hear any thoughts on both the interpreters I've proposed here, if these pros\/cons listed are legitimate, and any other interpreters I haven't thought of (e.g.- would php be appropriate for something like this? lua?). First hand experience with embedding an interpreter in your code is definitely a plus!","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":453,"Q_Id":3167661,"Users Score":3,"Answer":"Have you considered using Octave? From what I gather, it is nearly a drop-in replacement for much of matlab. This might allow you to support matlab for those who have it, and a free alternative for those who don't. Since the \"meat\" of your program appears to be written in another language, the performance considerations seem to be not as important as providing an environment that has: plotting and visualization capabilities, is cross-platform, has a big user base, and in a language that nearly everyone in academia and\/or involved with modelling fluid flow probably already knows. Matlab\/Octave can potentially have all of those.","Q_Score":6,"Tags":"c++,python,matlab,tcl,interpreter","A_Id":3168060,"CreationDate":"2010-07-02T16:58:00.000","Title":"Picking a front-end\/interpreter for a scientific code","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I find learning new topics comes best with an easy implementation to code to get the idea. This is how I learned genetic algorithms and genetic programming. What would be some good introductory programs to write to get started with machine learning?\nPreferably, let any referenced resources be accessible online so the community can benefit","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":5717,"Q_Id":3176967,"Users Score":1,"Answer":"Decision tree. It is frequently used in classification tasks and has a lot of variants. Tom Mitchell's book is a good reference to implement it.","Q_Score":14,"Tags":"python,computer-science,artificial-intelligence,machine-learning","A_Id":3177037,"CreationDate":"2010-07-05T02:25:00.000","Title":"What is a good first-implementation for learning machine learning?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I find learning new topics comes best with an easy implementation to code to get the idea. This is how I learned genetic algorithms and genetic programming. 
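Since the computational engine is already a library with a C-compatible API, a Python front end can be sketched with ctypes; everything below, the library name and the function signatures, is hypothetical and only shows the shape of such a wrapper:

import ctypes

sim = ctypes.CDLL('./libsimengine.so')           # hypothetical shared library

sim.sim_create.restype = ctypes.c_void_p
sim.sim_set_param.argtypes = [ctypes.c_void_p, ctypes.c_char_p, ctypes.c_double]
sim.sim_run.argtypes = [ctypes.c_void_p, ctypes.c_int]
sim.sim_run.restype = ctypes.c_double

handle = sim.sim_create()
sim.sim_set_param(handle, b'viscosity', 1.2e-3)
result = sim.sim_run(handle, 10000)              # e.g. number of monte-carlo iterations
print('final result:', result)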
What would be some good introductory programs to write to get started with machine learning?\nPreferably, let any referenced resources be accessible online so the community can benefit","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":5717,"Q_Id":3176967,"Users Score":1,"Answer":"Neural nets may be the easiest thing to implement first, and they're fairly thoroughly covered throughout literature.","Q_Score":14,"Tags":"python,computer-science,artificial-intelligence,machine-learning","A_Id":3182779,"CreationDate":"2010-07-05T02:25:00.000","Title":"What is a good first-implementation for learning machine learning?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I find learning new topics comes best with an easy implementation to code to get the idea. This is how I learned genetic algorithms and genetic programming. What would be some good introductory programs to write to get started with machine learning?\nPreferably, let any referenced resources be accessible online so the community can benefit","AnswerCount":5,"Available Count":3,"Score":-1.0,"is_accepted":false,"ViewCount":5717,"Q_Id":3176967,"Users Score":-8,"Answer":"There is something called books; are you familiar with those? When I was exploring AI two decades ago, there were many books. I guess now that the internet exists, books are archaic, but you can probably find some in an ancient library.","Q_Score":14,"Tags":"python,computer-science,artificial-intelligence,machine-learning","A_Id":3177077,"CreationDate":"2010-07-05T02:25:00.000","Title":"What is a good first-implementation for learning machine learning?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there anyone that has some ideas on how to implement the AdaBoost (Boostexter) algorithm in python?\nCheers!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4289,"Q_Id":3193756,"Users Score":2,"Answer":"Thanks a million Steve! In fact, your suggestion had some compatibility issues with MacOSX (a particular library was incompatible with the system) BUT it helped me find out a more interesting package : icsi.boost.macosx. I am just denoting that in case any Mac-eter finds it interesting! \nThank you again!\nTim","Q_Score":5,"Tags":"python,machine-learning,adaboost","A_Id":3207845,"CreationDate":"2010-07-07T10:07:00.000","Title":"AdaBoost ML algorithm python implementation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have a web-cam in our office kitchenette focused at our coffee maker. The coffee pot is clearly visible. Both the location of the coffee pot and the camera are static. Is it possible to calculate the height of coffee in the pot using image recognition? I've seen image recognition used for quite complex stuff like face-recognition. As compared to those projects, this seems to be a trivial task of measuring the height. \n(That's my best guess and I have no idea of the underlying complexities.)\nHow would I go about this? 
Would this be considered a very complex job to partake? FYI, I've never done any kind of imaging-related work.","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":1297,"Q_Id":3227843,"Users Score":2,"Answer":"First do thresholding, then segmentation. Then you can more easily detect edges.","Q_Score":25,"Tags":"python,image-processing","A_Id":3227905,"CreationDate":"2010-07-12T10:57:00.000","Title":"Determine height of Coffee in the pot using Python imaging","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have a web-cam in our office kitchenette focused at our coffee maker. The coffee pot is clearly visible. Both the location of the coffee pot and the camera are static. Is it possible to calculate the height of coffee in the pot using image recognition? I've seen image recognition used for quite complex stuff like face-recognition. As compared to those projects, this seems to be a trivial task of measuring the height. \n(That's my best guess and I have no idea of the underlying complexities.)\nHow would I go about this? Would this be considered a very complex job to partake? FYI, I've never done any kind of imaging-related work.","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1297,"Q_Id":3227843,"Users Score":0,"Answer":"Take pictures of the pot with different levels of coffee in it.\nDownsample each image to maybe 4*10 pixels.\nDo the same in a loop for each new live picture.\nCalculate the difference of each pixel's value compared to the reference images.\nTake the reference image with the smallest difference sum, and you have the state of your coffee machine.\n\nYou might experiment with whether a grayscale version, or only the red or green channel, gives better results.\nIf different light settings cause problems, this approach is useless. Just buy a spotlight for the coffee machine, or lighten or darken each picture until the sum of all pixels reaches a reference value.","Q_Score":25,"Tags":"python,image-processing","A_Id":3231205,"CreationDate":"2010-07-12T10:57:00.000","Title":"Determine height of Coffee in the pot using Python imaging","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, I have three numpy arrays which store latitude, longitude, and some property value on a grid -- that is, I have LAT(y,x), LON(y,x), and, say temperature T(y,x), for some limits of x and y. The grid isn't necessarily regular -- in fact, it's tripolar.\nI then want to interpolate these property (temperature) values onto a bunch of different lat\/lon points (stored as lat1(t), lon1(t), for about 10,000 t...) which do not fall on the actual grid points. I've tried matplotlib.mlab.griddata, but that takes far too long (it's not really designed for what I'm doing, after all). I've also tried scipy.interpolate.interp2d, but I get a MemoryError (my grids are about 400x400).\nIs there any sort of slick, preferably fast way of doing this? I can't help but think the answer is something obvious... 
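As a rough illustration of the threshold-then-measure idea suggested above, here is a minimal sketch using PIL and NumPy; the file name, the threshold of 128, and the choice of the centre column are all assumptions to be tuned against real webcam frames.

```python
import numpy as np
from PIL import Image

# Hypothetical snapshot from the webcam.
frame = np.array(Image.open("coffee_pot.png").convert("L"))  # greyscale matrix

mask = frame < 128                    # True where pixels are dark (coffee), guessed threshold
column = mask[:, mask.shape[1] // 2]  # look down the middle of the pot
rows = np.flatnonzero(column)
height_px = rows.max() - rows.min() + 1 if rows.size else 0
print("apparent coffee height in pixels:", height_px)
```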
Thanks!!","AnswerCount":6,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":28537,"Q_Id":3242382,"Users Score":1,"Answer":"There's a bunch of options here, which one is best will depend on your data...\nHowever I don't know of an out-of-the-box solution for you\nYou say your input data is from tripolar data. There are three main cases for how this data could be structured.\n\nSampled from a 3d grid in tripolar space, projected back to 2d LAT, LON data.\nSampled from a 2d grid in tripolar space, projected into 2d LAT LON data.\nUnstructured data in tripolar space projected into 2d LAT LON data\n\nThe easiest of these is 2. Instead of interpolating in LAT LON space, \"just\" transform your point back into the source space and interpolate there.\nAnother option that works for 1 and 2 is to search for the cells that maps from tripolar space to cover your sample point. (You can use a BSP or grid type structure to speed up this search) Pick one of the cells, and interpolate inside it.\nFinally there's a heap of unstructured interpolation options .. but they tend to be slow. \nA personal favourite of mine is to use a linear interpolation of the nearest N points, finding those N points can again be done with gridding or a BSP. Another good option is to Delauney triangulate the unstructured points and interpolate on the resulting triangular mesh.\nPersonally if my mesh was case 1, I'd use an unstructured strategy as I'd be worried about having to handle searching through cells with overlapping projections. Choosing the \"right\" cell would be difficult.","Q_Score":27,"Tags":"python,numpy,scipy,interpolation","A_Id":3242538,"CreationDate":"2010-07-13T23:37:00.000","Title":"Interpolation over an irregular grid","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using numpy. I have a matrix with 1 column and N rows and I want to get an array from with N elements.\nFor example, if i have M = matrix([[1], [2], [3], [4]]), I want to get A = array([1,2,3,4]).\nTo achieve it, I use A = np.array(M.T)[0]. Does anyone know a more elegant way to get the same result?\nThanks!","AnswerCount":10,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":319127,"Q_Id":3337301,"Users Score":2,"Answer":"First, Mv = numpy.asarray(M.T), which gives you a 4x1 but 2D array.\nThen, perform A = Mv[0,:], which gives you what you want. You could put them together, as numpy.asarray(M.T)[0,:].","Q_Score":184,"Tags":"python,arrays,matrix,numpy","A_Id":32098823,"CreationDate":"2010-07-26T17:25:00.000","Title":"Numpy matrix to array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a Python application that involves running regression analysis on live data, and charting both. That is, the application gets fed with live data, and the regression models re-calculates as the data updates. Please note that I want to plot both the input (the data) and output (the regression analysis) in the same one chart.\nI have previously done some work with Matplotlib. Is that the best framework for this? It seems to be fairly static, I can't find any good examples similar to mine above. It also seems pretty bloated to me. 
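One concrete way to try the Delaunay-triangulation strategy described above is scipy.interpolate.griddata, which triangulates the scattered source points and interpolates linearly inside each triangle (it appeared in SciPy releases later than the one current when this was asked). The grids and target points below are synthetic stand-ins for LAT, LON, T and lat1, lon1.

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic 2D source grids standing in for LAT, LON, T from the question.
ny, nx = 400, 400
LON, LAT = np.meshgrid(np.linspace(-180, 180, nx), np.linspace(-90, 90, ny))
T = np.cos(np.radians(LAT)) * np.sin(np.radians(LON))

points = np.column_stack([LON.ravel(), LAT.ravel()])   # scattered source points
lon1 = np.random.uniform(-180, 180, 10000)             # target points
lat1 = np.random.uniform(-90, 90, 10000)

# Delaunay-triangulation-based linear interpolation over the scattered data.
T1 = griddata(points, T.ravel(), (lon1, lat1), method="linear")
print(T1.shape)  # (10000,)
```

Targets outside the convex hull of the source points come back as NaN, which is worth checking for near the grid edges.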
Performance is key, so if there is any fast python charting framework out there with a small footprint, I'm all ears...","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":5782,"Q_Id":3351963,"Users Score":1,"Answer":"I havent worked with Matplotlib but I've always found gnuplot to be adequate for all my charting needs.\nYou have the option of calling gnuplot from python or using gnuplot.py \n(gnuplot-py.sourceforge.net) to interface to gnuplot.","Q_Score":6,"Tags":"python,live,charts","A_Id":3352172,"CreationDate":"2010-07-28T10:28:00.000","Title":"Good framework for live charting in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a C# application that needs to be run several thousand times. Currently it precomputes a large table of constant values at the start of the run for reference. As these values will be the same from run to run I would like to compute them independently in a simple python script and then just have the C# app import the file at the start of each run.\nThe table consists of a sorted 2D array (500-3000+ rows\/columns) of simple (int x, double y) tuples. I am looking for recommendations concerning the best\/simplest way to store and then import this data. For example, I could store the data in a text file like this \"(x1,y1)|(x2,y2)|(x3,y3)|...|(xn,yn)\" This seems like a very ugly solution to a problem that seems to lend itself to a specific data structure or library I am currently unaware of. Any suggestions would be welcome.","AnswerCount":6,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1046,"Q_Id":3355832,"Users Score":2,"Answer":"You may consider running IronPython - then you can pass values back and forth across C#\/Python","Q_Score":4,"Tags":"c#,python,file","A_Id":3355927,"CreationDate":"2010-07-28T17:52:00.000","Title":"Suggestions for passing large table between Python and C#","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a C# application that needs to be run several thousand times. Currently it precomputes a large table of constant values at the start of the run for reference. As these values will be the same from run to run I would like to compute them independently in a simple python script and then just have the C# app import the file at the start of each run.\nThe table consists of a sorted 2D array (500-3000+ rows\/columns) of simple (int x, double y) tuples. I am looking for recommendations concerning the best\/simplest way to store and then import this data. For example, I could store the data in a text file like this \"(x1,y1)|(x2,y2)|(x3,y3)|...|(xn,yn)\" This seems like a very ugly solution to a problem that seems to lend itself to a specific data structure or library I am currently unaware of. Any suggestions would be welcome.","AnswerCount":6,"Available Count":2,"Score":0.0333209931,"is_accepted":false,"ViewCount":1046,"Q_Id":3355832,"Users Score":1,"Answer":"CSV is fine suggestion, but may be clumsy with values being int and double. 
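A quick sanity check of the asarray approach described above, with ravel() shown as an equivalent alternative:

```python
import numpy as np

M = np.matrix([[1], [2], [3], [4]])

A1 = np.asarray(M.T)[0, :]   # the approach described above
A2 = np.asarray(M).ravel()   # an equivalent alternative

print(A1)  # [1 2 3 4]
print(A2)  # [1 2 3 4]
```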
Generally tab or semicomma are best separators.","Q_Score":4,"Tags":"c#,python,file","A_Id":3356036,"CreationDate":"2010-07-28T17:52:00.000","Title":"Suggestions for passing large table between Python and C#","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to load a CCITT T.3 compressed tiff into python, and get the pixel matrix from it. It should just be a logical matrix. \nI have tried using pylibtiff and PIL, but when I load it with them, the matrix it returns is empty. I have read in a lot of places that these two tools support loading CCITT but not accessing the pixels. \nI am open to converting the image, as long as I can get the logical matrix from it and do it in python code. The crazy thing is is that if I open one of my images in paint, save it without altering it, then try to load it with pylibtiff, it works. Paint re-compresses it to the LZW compression. \nSo I guess my real question is: Is there a way to either natively load CCITT images to matricies or convert the images to LZW using python?? \nThanks,\ntylerthemiler","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1297,"Q_Id":3355962,"Users Score":0,"Answer":"How about running tiffcp with subprocess to convert to LZW (-c lzw switch), then process normally with pylibtiff? There are Windows builds of tiffcp lying around on the web. Not exactly Python-native solution, but still...","Q_Score":2,"Tags":"python,compression,tiff,imaging,image-formats","A_Id":3357139,"CreationDate":"2010-07-28T18:06:00.000","Title":"What is the best way to load a CCITT T.3 compressed tiff using python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a simple application in Python which displays images.I need to implement Zoom In and Zoom Out by scaling the image.\nI think the Image.transform method will be able to do this, but I'm not sure how to use it, since it's asking for an affine matrix or something like that :P\nHere's the quote from the docs:\n\nim.transform(size, AFFINE, data, filter) => image\nApplies an affine transform to the image, and places the result in a new image with the given size.\nData is a 6-tuple (a, b, c, d, e, f) which contain the first two rows from an affine transform matrix. For each pixel (x, y) in the output image, the new value is taken from a position (a x + b y + c, d x + e y + f) in the input image, rounded to nearest pixel.\nThis function can be used to scale, translate, rotate, and shear the original image.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":7941,"Q_Id":3368740,"Users Score":6,"Answer":"You would be much better off using the EXTENT rather than the AFFINE method. You only need to calculate two things: what part of the input you want to see, and how large it should be. For example, if you want to see the whole image scaled down to half size (i.e. 
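A minimal sketch of the tab-separated suggestion above, assuming a hypothetical table of (int, double) rows and an output file named table.tsv; the C# side can then split each line on the tab character.

```python
import csv
import random

# Hypothetical precomputed table of (int x, double y) rows.
table = [(i, random.random()) for i in range(1000)]

with open("table.tsv", "w", newline="") as fh:
    writer = csv.writer(fh, delimiter="\t")
    for x, y in table:
        writer.writerow([x, repr(y)])  # repr round-trips the double exactly
```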
zooming out by 2), you'd pass the data (0, 0, im.size[0], im.size[1]) and the size (im.size[0]\/2, im.size[1]\/2).","Q_Score":7,"Tags":"python,image-processing,python-imaging-library","A_Id":3368847,"CreationDate":"2010-07-30T04:43:00.000","Title":"Zooming With Python Image Library","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a program that contains a large number of objects, many of them Numpy arrays. My program is swapping miserably, and I'm trying to reduce the memory usage, because it actually can't finis on my system with the current memory requirements.\nI am looking for a nice profiler that would allow me to check the amount of memory consumed by various objects (I'm envisioning a memory counterpart to cProfile) so that I know where to optimize.\nI've heard decent things about Heapy, but Heapy unfortunately does not support Numpy arrays, and most of my program involves Numpy arrays.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4821,"Q_Id":3372444,"Users Score":0,"Answer":"Can you just save\/pickle some of the arrays to disk in tmp files when not using them? That's what I've had to do in the past with large arrays. Of course this will slow the program down, but at least it'll finish. Unless you need them all at once?","Q_Score":27,"Tags":"python,numpy,memory-management,profile","A_Id":3374079,"CreationDate":"2010-07-30T14:30:00.000","Title":"Profile Memory Allocation in Python (with support for Numpy arrays)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use R in Python, as provided by the module Rpy2. I notice that R has very convenient [] operations by which you can extract the specific columns or lines. How could I achieve such a function by Python scripts?\nMy idea is to create an R vector and add those wanted elements into this vector so that the final vector is the same as that in R. I created a seq(), but it seems that it has an initial digit 1, so the final result would always start with the digit 1, which is not what I want. So, is there a better way to do this?","AnswerCount":7,"Available Count":1,"Score":0.1137907297,"is_accepted":false,"ViewCount":291651,"Q_Id":3413879,"Users Score":4,"Answer":"As pointed out by Brani, vector() is a solution, e.g.\nnewVector <- vector(mode = \"numeric\", length = 50)\nwill return a vector named \"newVector\" with 50 \"0\"'s as initial values. It is also fairly common to just add the new scalar to an existing vector to arrive at an expanded vector, e.g.\naVector <- c(aVector, newScalar)","Q_Score":104,"Tags":"python,r,vector,rpy2","A_Id":20171517,"CreationDate":"2010-08-05T10:39:00.000","Title":"How to create an empty R vector to add new items","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I saved some plots from matplotlib into a pdf format because it seems to offer a better quality. How do I include the PDF image into a PDF document using ReportLab? 
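Putting the accepted EXTENT answer above into a runnable sketch (file names are hypothetical):

```python
from PIL import Image

im = Image.open("photo.jpg")  # hypothetical input image
w, h = im.size

# Zoom out by 2: show the whole source extent at half the original size.
zoomed_out = im.transform(
    (w // 2, h // 2),             # output size
    Image.EXTENT,
    (0, 0, w, h),                 # source region to show
)

# Zoom in by 2 on the centre: show the middle quarter at the original size.
zoomed_in = im.transform(
    (w, h),
    Image.EXTENT,
    (w // 4, h // 4, 3 * w // 4, 3 * h // 4),
)
zoomed_in.save("zoomed_in.jpg")
```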
The convenience method Image(filepath) does not work for this format.\nThank you.","AnswerCount":4,"Available Count":1,"Score":-1.0,"is_accepted":false,"ViewCount":7751,"Q_Id":3448365,"Users Score":-5,"Answer":"Use from reportlab.graphics import renderPDF","Q_Score":12,"Tags":"python,image,pdf,reportlab","A_Id":4092003,"CreationDate":"2010-08-10T11:23:00.000","Title":"PDF image in PDF document using ReportLab (Python)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to do some image processing using Python.\nIs there a simple way to import .png image as a matrix of greyscale\/RGB values (possibly using PIL)?","AnswerCount":6,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":77563,"Q_Id":3493092,"Users Score":33,"Answer":"scipy.misc.imread() will return a Numpy array, which is handy for lots of things.","Q_Score":31,"Tags":"python,image-processing,numpy,python-imaging-library","A_Id":3494982,"CreationDate":"2010-08-16T12:29:00.000","Title":"Convert image to a matrix in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to do some image processing using Python.\nIs there a simple way to import .png image as a matrix of greyscale\/RGB values (possibly using PIL)?","AnswerCount":6,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":77563,"Q_Id":3493092,"Users Score":7,"Answer":"scipy.misc.imread() is deprecated now. We can use imageio.imread instead of that to read it as a Numpy array","Q_Score":31,"Tags":"python,image-processing,numpy,python-imaging-library","A_Id":50263426,"CreationDate":"2010-08-16T12:29:00.000","Title":"Convert image to a matrix in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"(Couldn't upload the picture showing the integral as I'm a new user.)","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":571,"Q_Id":3520672,"Users Score":1,"Answer":"Yes. 
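A minimal sketch of the imageio route mentioned above, assuming a hypothetical picture.png; imageio.imread returns a NumPy array directly.

```python
import imageio
import numpy as np

img = imageio.imread("picture.png")   # returns a NumPy array
print(img.shape, img.dtype)           # e.g. (H, W, 3) uint8 for an RGB PNG

# Collapse to greyscale with a simple channel average if needed.
if img.ndim == 3:
    grey = img[..., :3].mean(axis=2)
else:
    grey = np.asarray(img, dtype=float)
print(grey.shape)
```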
Those integrals (I'll assume they're area integrals over a region in 2D space) can be calculated using an appropriate quadrature rule.\nYou can also use Green's theorem to convert them into contour integrals and use Gaussian quadrature to integrate along the path.","Q_Score":1,"Tags":"python,scipy","A_Id":3520715,"CreationDate":"2010-08-19T10:07:00.000","Title":"Can scipy calculate (double) integrals with complex-valued integrands (real and imaginary parts in integrand)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"(Couldn't upload the picture showing the integral as I'm a new user.)","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":571,"Q_Id":3520672,"Users Score":0,"Answer":"Thanks duffymo!\nI am calculating Huygens-Fresnel diffraction integrals: plane and other wave diffraction through circular (2D) apertures in polar coordinates.\nAs far as the programming goes: Currently a lot of my code is in Mathematica. I am considering changing to one of: scipy, java + flanagan math library, java + apache commons math library, gnu scientific library, or octave.\nMy first candidate for evaluation is scipy, but if it cannot handle complex-valued integrands, then I have to change my plans for the weekend...","Q_Score":1,"Tags":"python,scipy","A_Id":3526740,"CreationDate":"2010-08-19T10:07:00.000","Title":"Can scipy calculate (double) integrals with complex-valued integrands (real and imaginary parts in integrand)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've never seen this error before, and none of the hits on Google seem to apply. I've got a very large NumPy array that holds Boolean values. When I try writing the array using numpy.dump(), I get the following error:\nSystemError: NULL result without error in PyObject_Call\nThe array is initialized with all False values, and the only time I ever access it is to set some of the values to True, so I have no idea why any of the values would be null.\nWhen I try running the same program with a smaller array, I get no error. However, since the error occurs at the writing step, I don't think that it's a memory issue. Has anybody else seen this error before?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":3309,"Q_Id":3576430,"Users Score":1,"Answer":"It appears that this may have been an error from using the 32-bit version of NumPy and not the 64 bit. For whatever reason, though the program has no problem keeping the array in memory, it trips up when writing the array to a file if the number of elements in the array is greater than 2^32.","Q_Score":0,"Tags":"python,numpy","A_Id":3605012,"CreationDate":"2010-08-26T15:00:00.000","Title":"Python\/Numpy error: NULL result without error in PyObject_Call","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've never seen this error before, and none of the hits on Google seem to apply. I've got a very large NumPy array that holds Boolean values. 
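SciPy's dblquad integrates real-valued functions, so one common workaround (not necessarily what the answerer had in mind) is to integrate the real and imaginary parts separately and recombine them; the integrand below is a made-up example over the unit square.

```python
import numpy as np
from scipy.integrate import dblquad

# Hypothetical complex-valued integrand over the unit square.
def f(y, x):
    return np.exp(1j * (x + y))

# dblquad expects func(y, x); integrate real and imaginary parts separately.
re, re_err = dblquad(lambda y, x: f(y, x).real, 0, 1, lambda x: 0, lambda x: 1)
im, im_err = dblquad(lambda y, x: f(y, x).imag, 0, 1, lambda x: 0, lambda x: 1)

result = re + 1j * im
print(result)  # numerical approximation of the double integral
```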
When I try writing the array using numpy.dump(), I get the following error:\nSystemError: NULL result without error in PyObject_Call\nThe array is initialized with all False values, and the only time I ever access it is to set some of the values to True, so I have no idea why any of the values would be null.\nWhen I try running the same program with a smaller array, I get no error. However, since the error occurs at the writing step, I don't think that it's a memory issue. Has anybody else seen this error before?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":3309,"Q_Id":3576430,"Users Score":1,"Answer":"That message comes directly from the CPython interpreter (see abstract.c method PyObject_Call). You may get a better response on a Python or NumPy mailing list regarding that error message because it looks like a problem in C code. \nWrite a simple example to demonstrating the problem and you should be able to narrow the issue down to a module then a method.","Q_Score":0,"Tags":"python,numpy","A_Id":3576712,"CreationDate":"2010-08-26T15:00:00.000","Title":"Python\/Numpy error: NULL result without error in PyObject_Call","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to import SPSS dataset into Python, preferably NumPy recarray format?\nI have looked around but could not find any answer.\nJoon","AnswerCount":7,"Available Count":3,"Score":0.0855049882,"is_accepted":false,"ViewCount":9605,"Q_Id":3639639,"Users Score":3,"Answer":"Option 1\nAs rkbarney pointed out, there is the Python savReaderWriter available via pypi. I've run into two issues:\n\nIt relies on a lot of extra libraries beyond the seemingly pure-python implementation. SPSS files are read and written in nearly every case by the IBM provided SPSS I\/O modules. These modules differ by platform and in my experience \"pip install savReaderWriter\" doesn't get them running out of the box (on OS X).\nDevelopment on savReaderWriter is, while not dead, less up-to-date than one might hope. This complicates the first issue. It relies on some deprecated packages to increase speed and gives some warnings any time you import savReaderWriter if they're not available. Not a huge issue today but it could be trouble in the future as IBM continues to update the SPSS I\/O modules to deal new SPSS formats (they're on version 21 or 22 already if memory serves).\n\nOption 2\nI've chosen to use R as a middle-man. Using rpy2, I set up a simple function to read the file into an R data frame and output it again as a CSV file which I subsequently import into python. It's a bit rube-goldberg but it works. 
Of course, this requires R which may also be a hassle to install in your environment (and has different binaries for different platforms).","Q_Score":6,"Tags":"python,import,dataset,spss","A_Id":21390123,"CreationDate":"2010-09-03T21:18:00.000","Title":"Importing SPSS dataset into Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to import SPSS dataset into Python, preferably NumPy recarray format?\nI have looked around but could not find any answer.\nJoon","AnswerCount":7,"Available Count":3,"Score":0.0285636566,"is_accepted":false,"ViewCount":9605,"Q_Id":3639639,"Users Score":1,"Answer":"To be clear, the SPSS ODBC driver does not require an SPSS installation.","Q_Score":6,"Tags":"python,import,dataset,spss","A_Id":3691267,"CreationDate":"2010-09-03T21:18:00.000","Title":"Importing SPSS dataset into Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to import SPSS dataset into Python, preferably NumPy recarray format?\nI have looked around but could not find any answer.\nJoon","AnswerCount":7,"Available Count":3,"Score":0.0855049882,"is_accepted":false,"ViewCount":9605,"Q_Id":3639639,"Users Score":3,"Answer":"SPSS has an extensive integration with Python, but that is meant to be used with SPSS (now known as IBM SPSS Statistics). There is an SPSS ODBC driver that could be used with Python ODBC support to read a sav file.","Q_Score":6,"Tags":"python,import,dataset,spss","A_Id":3640019,"CreationDate":"2010-09-03T21:18:00.000","Title":"Importing SPSS dataset into Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a function defined by a combination of basic math functions (abs, cosh, sinh, exp, ...).\nI was wondering if it makes a difference (in speed) to use, for example,\nnumpy.abs() instead of abs()?","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":39385,"Q_Id":3650194,"Users Score":25,"Answer":"You should use numpy function to deal with numpy's types and use regular python function to deal with regular python types.\nWorst performance usually occurs when mixing python builtins with numpy, because of types conversion. Those type conversion have been optimized lately, but it's still often better to not use them. Of course, your mileage may vary, so use profiling tools to figure out.\nAlso consider the use of programs like cython or making a C module if you want to optimize further your program. 
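A sketch of the R-as-middle-man route described above, assuming R, its foreign package, and rpy2 are installed; the file names are hypothetical.

```python
import csv
import rpy2.robjects as robjects

# Ask R (via the 'foreign' package) to convert the .sav file to CSV.
robjects.r('''
    library(foreign)
    d <- read.spss("survey.sav", to.data.frame = TRUE)
    write.csv(d, "survey.csv", row.names = FALSE)
''')

# Read the intermediate CSV back into Python.
with open("survey.csv", newline="") as fh:
    rows = list(csv.DictReader(fh))
print(len(rows), "records,", len(rows[0]) if rows else 0, "columns")
```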
Or consider not to use python when performances matters.\nbut, when your data has been put into a numpy array, then numpy can be really fast at computing bunch of data.","Q_Score":67,"Tags":"python,performance,numpy","A_Id":3650761,"CreationDate":"2010-09-06T09:04:00.000","Title":"Are NumPy's math functions faster than Python's?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on an app that takes in webcam data, applies various transformations, blurs and then does a background subtraction and threshold filter. It's a type of optical touch screen retrofitting system (the design is so different that tbeta\/touchlib can't be used).\nThe camera's white balance is screwing up the threshold filter by brightening everything whenever a user's hand is seen and darkening when it leaves, causing one of those to exhibit immense quantities of static.\nIs there a good way to counteract it? Is taking a corner, assuming it's constant and adjusting the rest of the image's brightness so that it stays constant a good idea?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":4888,"Q_Id":3680829,"Users Score":1,"Answer":"You could try interfacing your camera through DirectShow and turn off Auto White Balance through your code or you could try first with the camera software deployed with it. It often gives you ability to do certain modifications as white balance and similar stuff.","Q_Score":4,"Tags":"python,opencv,webcam,touchscreen,background-subtraction","A_Id":3686359,"CreationDate":"2010-09-09T21:49:00.000","Title":"Compensate for Auto White Balance with OpenCV","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project on Info Retrieval.\nI have made a Full Inverted Index using Hadoop\/Python. \nHadoop outputs the index as (word,documentlist) pairs which are written on the file.\nFor a quick access, I have created a dictionary(hashtable) using the above file.\nMy question is, how do I store such an index on disk that also has quick access time.\nAt present I am storing the dictionary using python pickle module and loading from it\nbut it brings the whole of index into memory at once (or does it?). \nPlease suggest an efficient way of storing and searching through the index.\nMy dictionary structure is as follows (using nested dictionaries)\n{word : {doc1:[locations], doc2:[locations], ....}}\nso that I can get the documents containing a word by\ndictionary[word].keys() ... and so on.","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3666,"Q_Id":3687715,"Users Score":0,"Answer":"You could store the repr() of the dictionary and use that to re-create it.","Q_Score":5,"Tags":"python,information-retrieval,inverted-index","A_Id":3688556,"CreationDate":"2010-09-10T19:29:00.000","Title":"Storing an inverted index","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project on Info Retrieval.\nI have made a Full Inverted Index using Hadoop\/Python. 
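A quick way to check the claim above on your own machine is timeit; the array size and repeat count below are arbitrary.

```python
import timeit
import numpy as np

x = np.random.randn(100_000)
xs = list(x)

t_numpy = timeit.timeit(lambda: np.abs(x), number=100)
t_builtin = timeit.timeit(lambda: [abs(v) for v in xs], number=100)
t_mixed = timeit.timeit(lambda: np.abs(xs), number=100)  # converts the list on every call

print(f"np.abs on ndarray : {t_numpy:.3f}s")
print(f"abs on Python list: {t_builtin:.3f}s")
print(f"np.abs on list    : {t_mixed:.3f}s")
```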
\nHadoop outputs the index as (word,documentlist) pairs which are written on the file.\nFor a quick access, I have created a dictionary(hashtable) using the above file.\nMy question is, how do I store such an index on disk that also has quick access time.\nAt present I am storing the dictionary using python pickle module and loading from it\nbut it brings the whole of index into memory at once (or does it?). \nPlease suggest an efficient way of storing and searching through the index.\nMy dictionary structure is as follows (using nested dictionaries)\n{word : {doc1:[locations], doc2:[locations], ....}}\nso that I can get the documents containing a word by\ndictionary[word].keys() ... and so on.","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3666,"Q_Id":3687715,"Users Score":0,"Answer":"I am using anydmb for that purpose. Anydbm provides the same dictionary-like interface, except it allow only strings as keys and values. But this is not a constraint since you can use cPickle's loads\/dumps to store more complex structures in the index.","Q_Score":5,"Tags":"python,information-retrieval,inverted-index","A_Id":5341353,"CreationDate":"2010-09-10T19:29:00.000","Title":"Storing an inverted index","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I have a file that consists of too many columns to open in excel. Each column has 10 rows of numerical values 0-2 and has a row saying the title of the column. I would like the output to be the name of the column and the average value of the 10 rows. The file is too large to open in excel 2000 so I have to try using python. Any tips on an easy way to do this.\nHere is a sample of the first 3 columns:\nTrial1 Trial2 Trial3\n1 0 1\n0 0 0\n0 2 0\n2 2 2\n1 1 1\n1 0 1\n0 0 0\n0 2 0\n2 2 2\n1 1 1\nI want python to output as a test file\nTrial 1 Trial 2 Trial 3\n1 2 1 (whatever the averages are)","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5272,"Q_Id":3692996,"Users Score":0,"Answer":"Less of an answer than it is an alternative understanding of the problem:\nYou could think of each line being a vector. In this way, the average done column-by-column is just the average of each of these vectors. All you need in order to do this is\n\nA way to read a line into a vector object,\nA vector addition operation,\nScalar multiplication (or division) of vectors.\n\nPython comes (I think) with most of this already installed, but this should lead to some easily readable code.","Q_Score":2,"Tags":"python","A_Id":19403571,"CreationDate":"2010-09-11T22:55:00.000","Title":"How to find the average of multiple columns in a file using python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Bayesian Classifier programmed in Python, the problem is that when I multiply the features probabilities I get VERY small float values like 2.5e-320 or something like that, and suddenly it turns into 0.0. The 0.0 is obviously of no use to me since I must find the \"best\" class based on which class returns the MAX value (greater value).\nWhat would be the best way to deal with this? 
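A minimal Python 3 sketch of the anydbm-plus-cPickle idea from the answer above, using their modern counterparts dbm and pickle; the postings data and file name are made up, and the postings follow the {word: {doc: [locations]}} shape from the question.

```python
import dbm
import pickle

# Hypothetical postings in the {word: {doc: [locations]}} shape from the question.
index = {
    "hadoop": {"doc1": [4, 17], "doc7": [2]},
    "python": {"doc1": [9], "doc3": [0, 5, 12]},
}

# Store: dbm keys and values must be bytes, so pickle each postings dict.
with dbm.open("inverted.db", "c") as db:
    for word, postings in index.items():
        db[word.encode("utf-8")] = pickle.dumps(postings)

# Look up one word without loading the whole index into memory.
with dbm.open("inverted.db", "r") as db:
    postings = pickle.loads(db[b"python"])
    print(sorted(postings.keys()))  # ['doc1', 'doc3']
```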
I thought about finding the exponential portion of the number (-320) and, if it goes too low, multiplying the value by 1e20 or some value like that. But maybe there is a better way?","AnswerCount":4,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":17743,"Q_Id":3704570,"Users Score":20,"Answer":"Would it be possible to do your work in a logarithmic space? (For example, instead of storing 1e-320, just store -320, and use addition instead of multiplication)","Q_Score":24,"Tags":"python,floating-point,numerical-stability","A_Id":3704637,"CreationDate":"2010-09-13T21:29:00.000","Title":"In Python small floats tending to zero","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing an application where quite a bit of the computational time will be devoted to performing basic linear algebra operations (add, multiply, multiply by vector, multiply by scalar, etc.) on sparse matrices and vectors. Up to this point, we've built a prototype using C++ and the Boost matrix library. \nI'm considering switching to Python, to ease of coding the application itself, since it seems the Boost library (the easy C++ linear algebra library) isn't particularly fast anyway. This is a research\/proof of concept application, so some reduction of run time speed is acceptable (as I assume C++ will almost always outperform Python) so long as coding time is also significantly decreased.\nBasically, I'm looking for general advice from people who have used these libraries before. But specifically:\n1) I've found scipy.sparse and and pySparse. Are these (or other libraries) recommended?\n2) What libraries beyond Boost are recommended for C++? I've seen a variety of libraries with C interfaces, but again I'm looking to do something with low complexity, if I can get relatively good performance.\n3) Ultimately, will Python be somewhat comparable to C++ in terms of run time speed for the linear algebra operations? I will need to do many, many linear algebra operations and if the slowdown is significant then I probably shouldn't even try to make this switch.\nThank you in advance for any help and previous experience you can relate.","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":3064,"Q_Id":3761994,"Users Score":2,"Answer":"I don't have directly applicable experience, but the scipy\/numpy operations are almost all implemented in C. As long as most of what you need to do is expressed in terms of scipy\/numpy functions, then your code shouldn't be much slower than equivalent C\/C++.","Q_Score":4,"Tags":"c++,python,linear-algebra","A_Id":3762217,"CreationDate":"2010-09-21T15:45:00.000","Title":"Python vs. C++ for an application that does sparse linear algebra","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing an application where quite a bit of the computational time will be devoted to performing basic linear algebra operations (add, multiply, multiply by vector, multiply by scalar, etc.) on sparse matrices and vectors. Up to this point, we've built a prototype using C++ and the Boost matrix library. 
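A minimal sketch of the log-space suggestion above, with made-up feature probabilities; the ordering of the log-sums matches the ordering of the underflowing products.

```python
import math

# Hypothetical per-class feature probabilities from a naive Bayes model.
class_probs = {
    "spam": [1e-80, 3e-90, 2e-85, 5e-70],
    "ham":  [2e-75, 1e-95, 4e-88, 9e-72],
}

# Multiplying these underflows to 0.0; summing their logs does not.
scores = {
    label: sum(math.log(p) for p in probs)
    for label, probs in class_probs.items()
}
best = max(scores, key=scores.get)
print(scores)
print("best class:", best)
```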
\nI'm considering switching to Python, to ease of coding the application itself, since it seems the Boost library (the easy C++ linear algebra library) isn't particularly fast anyway. This is a research\/proof of concept application, so some reduction of run time speed is acceptable (as I assume C++ will almost always outperform Python) so long as coding time is also significantly decreased.\nBasically, I'm looking for general advice from people who have used these libraries before. But specifically:\n1) I've found scipy.sparse and and pySparse. Are these (or other libraries) recommended?\n2) What libraries beyond Boost are recommended for C++? I've seen a variety of libraries with C interfaces, but again I'm looking to do something with low complexity, if I can get relatively good performance.\n3) Ultimately, will Python be somewhat comparable to C++ in terms of run time speed for the linear algebra operations? I will need to do many, many linear algebra operations and if the slowdown is significant then I probably shouldn't even try to make this switch.\nThank you in advance for any help and previous experience you can relate.","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":3064,"Q_Id":3761994,"Users Score":1,"Answer":"Speed is no longer such an issue for Python nowadays, since ctypes and Cython emerged. What is brilliant about Cython is that you write Python code and it generates C code, without requiring you to know a single line of C, and then compiles it to a library (or you can even create a standalone executable). ctypes is similar, though a bit slower. From the tests I have run, Cython code is as fast as C code, and that makes sense since Cython code is translated to C code. ctypes is a bit slower.\nSo in the end it is a question of profiling: see what is slow in Python and move it to Cython, or wrap your existing C libraries for Python with Cython. It is quite easy to reach C speeds this way.\nSo I would recommend not wasting the effort you invested in creating these C libraries; wrap them with Cython and do the rest in Python. Or you could do all of it in Cython if you wish, since Cython is Python bar some limitations, and it even lets you mix in C code. So you could do part of it in C and part of it in Python\/Cython, depending on what makes you feel more comfortable.\nNumPy and SciPy can be used as well, saving more time and providing ready-to-use solutions to your problems\/needs. You should certainly check them out. SciPy also ships weave, a tool that lets you inline C code inside your Python code, just as you can inline assembly code inside your C code. But I think you would prefer to use Cython. Remember that because Cython is both C and Python at the same time, it allows you to use C and Python libraries directly.","Q_Score":4,"Tags":"c++,python,linear-algebra","A_Id":3762759,"CreationDate":"2010-09-21T15:45:00.000","Title":"Python vs. C++ for an application that does sparse linear algebra","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Which one among the two languages is good for statistical analysis? 
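If you go the scipy.sparse route mentioned above, the basic operations from the question look like this (matrix contents are arbitrary example data):

```python
import numpy as np
from scipy.sparse import csr_matrix

# A small sparse matrix in CSR form.
A = csr_matrix(np.array([[0., 2., 0.],
                         [1., 0., 0.],
                         [0., 0., 3.]]))
v = np.array([1.0, 2.0, 3.0])

B = A + A            # sparse + sparse
C = 2.5 * A          # scalar * sparse
w = A.dot(v)         # matrix-vector product -> dense ndarray
D = A.dot(A)         # matrix-matrix product, stays sparse

print(w)             # [4. 1. 9.]
print(D.toarray())
```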
What are the pros and cons, other than accessibility, for each?","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":1096,"Q_Id":3792465,"Users Score":2,"Answer":"SciPy, NumPy and Matplotlib.","Q_Score":8,"Tags":"python,matlab,statistics,analysis","A_Id":3792494,"CreationDate":"2010-09-25T04:12:00.000","Title":"Among MATLAB and Python, which one is good for statistical analysis?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Which one among the two languages is good for statistical analysis? What are the pros and cons, other than accessibility, for each?","AnswerCount":5,"Available Count":2,"Score":0.1194272985,"is_accepted":false,"ViewCount":1096,"Q_Id":3792465,"Users Score":3,"Answer":"I would pick Python because it can be a powerful as Matlab but is free. Also, you can distribute your applications for free and no licensing chains. \nMatlab is awesome and expensive (it had a great statistical package) and it will glow smoother than Python in the beginning, but not so in the long run. \nNow, if you really want the best solution then check out R, the statistical package which is de facto in the community. They even have a Python port for it. R is also free software.","Q_Score":8,"Tags":"python,matlab,statistics,analysis","A_Id":3792582,"CreationDate":"2010-09-25T04:12:00.000","Title":"Among MATLAB and Python, which one is good for statistical analysis?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been researching the similarities\/differences between Ruby and Python generators (known as Enumerators in Ruby), and so far as i can tell they're pretty much equivalent. \nHowever one difference i've noticed is that Python Generators support a close() method whereas Ruby Generators do not. From the Python docs the close() method is said to do the following:\n\nRaises a GeneratorExit at the point where the generator function was paused. If the generator function then raises StopIteration (by exiting normally, or due to already being closed) or GeneratorExit (by not catching the exception), close returns to its caller.\"\n\nIs there a good reason why Ruby Enumerators don't support the close() method? 
Or is it an accidental \nomission?\nI also discovered that Ruby Enumerators support a rewind() method yet Python generators do not...is there a reason for this too?\nThanks","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":4732,"Q_Id":3794762,"Users Score":2,"Answer":"Generators are stack based, Ruby's Enumerators are often specialised (at the interpreter level) and not stack based.","Q_Score":20,"Tags":"python,ruby,generator,enumerator","A_Id":3796237,"CreationDate":"2010-09-25T17:25:00.000","Title":"Ruby generators vs Python generators","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In using the numpy.darray, I met a memory overflow problem due to the size of data\uff0cfor example:\nSuppose I have a 100000000 * 100000000 * 100000000 float64 array data source, when I want to read data and process it in memory with np. It will raise a Memoray Error because it works out all memory for storing such a big array in memory.\nThen maybe using a disk file \/ database as a buffer to store the array is a solution, when I want to use data, it will get the necessary data from the file \/ database, otherwise, it is just a python object take few memory.\nIs it possible write such a adapter?\nThanks.\nRgs,\nKC","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":337,"Q_Id":3818881,"Users Score":0,"Answer":"If You have matrices with lots of zeros use scipy.sparse.csc_matrix. \nIt's possible to write everything, for example You can override numarray array class.","Q_Score":1,"Tags":"python,memory-management,numpy","A_Id":3819063,"CreationDate":"2010-09-29T05:10:00.000","Title":"using file\/db as the buffer for very big numpy array to yield data prevent overflow?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have input values of x, y, z coordinates in the following format:\n[-11.235865 5.866001 -4.604924]\n[-11.262565 5.414276 -4.842384]\n[-11.291885 5.418229 -4.849229]\n[-11.235865 5.866001 -4.604924]\nI want to draw polygons and succeeded with making a list of wx.point objects. But I need to plot floating point coordinates so I had to change it to point2D objects but DrawPolygon doesn't seem to understand floating points, which returns error message: TypeError: Expected a sequence of length-2 sequences or wxPoints.\nI can't find anywhere in the API that can draw shapes based on point2D coordinates, could anyone tell me a function name will do the job? \nThanks","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1320,"Q_Id":3852146,"Users Score":0,"Answer":"DC's only use integers. Try using Cairo or wx.GraphicsContext.","Q_Score":1,"Tags":"python,wxpython","A_Id":3855400,"CreationDate":"2010-10-04T00:29:00.000","Title":"How to draw polygons with Point2D in wxPython?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the fastest way to sort an array of whole integers bigger than 0 and less than 100000 in Python? 
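A small demonstration of the close()/GeneratorExit behaviour quoted in the question above:

```python
def numbers():
    try:
        n = 0
        while True:
            yield n
            n += 1
    except GeneratorExit:
        # Raised at the paused yield when close() is called.
        print("generator is being closed, releasing resources")
        raise  # re-raising (or returning) lets close() finish cleanly

gen = numbers()
print(next(gen), next(gen))  # 0 1
gen.close()                  # prints the message, then the generator is finished
```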
But not using the built-in functions like sort.\nI'm looking at the possibility of combining two sort functions depending on input size.","AnswerCount":10,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":57933,"Q_Id":3855537,"Users Score":4,"Answer":"Radix sort theoretically runs in linear time (sort time grows roughly in direct proportion to array size), but in practice Quicksort is probably more suited, unless you're sorting absolutely massive arrays.\nIf you want to make quicksort a bit faster, you can use insertion sort when the array size becomes small.\nIt would probably be helpful to understand the concepts of algorithmic complexity and Big-O notation too.","Q_Score":20,"Tags":"python,arrays,performance,sorting","A_Id":3859736,"CreationDate":"2010-10-04T13:12:00.000","Title":"Fastest way to sort in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got a numpy array in Python and I'd like to display it on-screen as a raster image. What is the simplest way to do this? It doesn't need to be particularly fancy or have a nice interface, all I need to do is to display the contents of the array as a greyscale raster image.\nI'm trying to transition some of my IDL code to Python with NumPy and am basically looking for a replacement for the tv and tvscl commands in IDL.","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":41992,"Q_Id":3886281,"Users Score":3,"Answer":"Quick addition: for displaying with matplotlib, if you want the image to appear \"raster\", i.e., pixelized without smoothing, then you should include the option interpolation='nearest' in the call to imshow.","Q_Score":23,"Tags":"python,image,image-processing,numpy","A_Id":25062330,"CreationDate":"2010-10-07T22:19:00.000","Title":"Display array as raster image in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing some modelling routines in NumPy that need to select cells randomly from a NumPy array and do some processing on them. All cells must be selected without replacement (as in, once a cell has been selected it can't be selected again, but all cells must be selected by the end).\nI'm transitioning from IDL where I can find a nice way to do this, but I assume that NumPy has a nice way to do this too. 
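Since the integers are known to lie below 100000, a counting sort (the single-digit special case of the radix sort mentioned above) is a simple linear-time option; this is an illustrative sketch under that bounded-range assumption, not a replacement for the built-in sort.

```python
import random

def counting_sort(values, max_value=100000):
    """Sort non-negative ints < max_value in O(n + max_value) time."""
    counts = [0] * max_value
    for v in values:
        counts[v] += 1
    out = []
    for v, c in enumerate(counts):
        out.extend([v] * c)
    return out

data = [random.randrange(1, 100000) for _ in range(10000)]
assert counting_sort(data) == sorted(data)
```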
What would you suggest?\nUpdate: I should have stated that I'm trying to do this on 2D arrays, and therefore get a set of 2D indices back.","AnswerCount":6,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":15426,"Q_Id":3891180,"Users Score":1,"Answer":"people using numpy version 1.7 or later there can also use the builtin function numpy.random.choice","Q_Score":13,"Tags":"python,random,numpy,shuffle,sampling","A_Id":18177268,"CreationDate":"2010-10-08T13:48:00.000","Title":"Select cells randomly from NumPy array - without replacement","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been tasked to find N clusters containing the most points for a certain data set given that the clusters are bounded by a certain size. Currently, I am attempting to do this by plugging in my data into a kd-tree, iterating over the data and finding its nearest neighbor, and then merging the points if the cluster they make does not exceed a limit. I'm not sure this approach will give me a global solution so I'm looking for ways to tweak it. If you can tell me what type of problem this would go under, that'd be great too.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1423,"Q_Id":3891645,"Users Score":0,"Answer":"If your number of clusters is fixed and you only want to maximize the number of points that are in these clusters then I think a greedy solution would be good : \n\nfind the rectangle that can contains the maximum number of points,\nremove these points,\nfind the next rectangle\n...\n\nSo how to find the rectangle of maximum area A (in fact each rectangle will have this area) that contains the maximum number of points ?\nA rectangle is not really common for euclidean distance, before trying to solve this, could you precise if you really need rectangle or just some king of limit on the cluster size ? Would a circle\/ellipse work ?\nEDIT : \ngreedy will not work (see comment below) and it really need to be rectangles...","Q_Score":4,"Tags":"python,algorithm,cluster-analysis,classification,nearest-neighbor","A_Id":3893461,"CreationDate":"2010-10-08T14:40:00.000","Title":"Clustering problem","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I build numpy 1.5 on ubuntu 10.10?\nThe instructions I found seems outdated or not clear.\nThanks","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1563,"Q_Id":3933923,"Users Score":2,"Answer":"One way to try, which isn't guaranteed to work, but worth a shot is to see if uupdate can sucessfully update the package. Get a tarball of numpy 1.5. run \"apt-get source numpy\" which should fetch and unpack the current source from ubuntu. cd into this source directory and run \"uupdate ..\/numpytarballname\". This should update the old source package using the newer tarball. then you can try building with \"apt-get build-dep numpy\" and \"dpkg-buildpackage -rfakeroot\". 
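A sketch of the numpy.random.choice suggestion above (NumPy 1.7+), extended to the 2D-index case raised in the question's update via unravel_index; the grid is a made-up example.

```python
import numpy as np

grid = np.arange(20).reshape(4, 5)      # hypothetical 2D model grid

# Draw every cell exactly once, in random order, without replacement.
flat = np.random.choice(grid.size, size=grid.size, replace=False)
rows, cols = np.unravel_index(flat, grid.shape)

total = sum(grid[r, c] for r, c in zip(rows, cols))   # "process" each cell once
assert len(set(zip(rows.tolist(), cols.tolist()))) == grid.size  # no repeats
print(total == grid.sum())  # True: every cell visited exactly once
```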
This will require you have the build-essential and fakeroot packages installed.","Q_Score":0,"Tags":"python,ubuntu,numpy","A_Id":3934387,"CreationDate":"2010-10-14T13:56:00.000","Title":"build recent numpy on recent ubuntu","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Some coworkers who have been struggling with Stata 11 are asking for my help to try to automate their laborious work. They mainly use 3 commands in Stata:\n\ntsset (sets a time series analysis)\n\nas in: tsset year_column, yearly\n\nvarsoc (Obtain lag-order selection statistics for VARs)\n\nas in: varsoc column_a column_b\n\nvec (vector error-correction model)\n\nas in: vec column_a column_b, trend(con) lags(1) noetable\n\nDoes anyone know any scientific library that I can use through python for this same functionality?","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":1936,"Q_Id":3946219,"Users Score":2,"Answer":"Use Rpy2 and call the R var package.","Q_Score":9,"Tags":"python,statistics,numpy,stata","A_Id":4572785,"CreationDate":"2010-10-15T21:16:00.000","Title":"Migrating from Stata to Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Some coworkers who have been struggling with Stata 11 are asking for my help to try to automate their laborious work. They mainly use 3 commands in Stata:\n\ntsset (sets a time series analysis)\n\nas in: tsset year_column, yearly\n\nvarsoc (Obtain lag-order selection statistics for VARs)\n\nas in: varsoc column_a column_b\n\nvec (vector error-correction model)\n\nas in: vec column_a column_b, trend(con) lags(1) noetable\n\nDoes anyone know any scientific library that I can use through python for this same functionality?","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1936,"Q_Id":3946219,"Users Score":0,"Answer":"I have absolutely no clue what any of those do, but NumPy and SciPy. Maybe Sage or SymPy.","Q_Score":9,"Tags":"python,statistics,numpy,stata","A_Id":3946648,"CreationDate":"2010-10-15T21:16:00.000","Title":"Migrating from Stata to Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to know how I would go about performing image analysis in R. My goal is to convert images into matrices (pixel-wise information), extract\/quantify color, estimate the presence of shapes and compare images based on such metrics\/patterns.\nI am aware of relevant packages available in Python (suggestions relevant to Python are also welcome), but I am looking to accomplish these tasks in R.\nThank you for your feedback.\n-Harsh","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":4953,"Q_Id":3955077,"Users Score":1,"Answer":"Try the rgdal package. 
You will be able to read (import) and write (export) GeoTiff image files from\/to R.\nMarcio Pupin Mello","Q_Score":15,"Tags":"python,image,r,analysis","A_Id":3964945,"CreationDate":"2010-10-17T20:12:00.000","Title":"Image analysis in R","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"And what is it called? I don't know how to search for it; I tried calling it ellipsis with the Google. I don't mean in interactive output when dots are used to indicate that the full array is not being shown, but as in the code I'm looking at, \nxTensor0[...] = xVTensor[..., 0]\nFrom my experimentation, it appears to function the similarly to : in indexing, but stands in for multiple :'s, making x[:,:,1] equivalent to x[...,1].","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1449,"Q_Id":3993125,"Users Score":7,"Answer":"Yes, you're right. It fills in as many : as required. The only difference occurs when you use multiple ellipses. In that case, the first ellipsis acts in the same way, but each remaining one is converted to a single :.","Q_Score":7,"Tags":"python,numpy","A_Id":3993156,"CreationDate":"2010-10-22T00:50:00.000","Title":"What does ... mean in numpy code?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I generate random integers between 0 and 9 (inclusive) in Python?\nFor example, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9","AnswerCount":22,"Available Count":1,"Score":0.0272659675,"is_accepted":false,"ViewCount":2497091,"Q_Id":3996904,"Users Score":3,"Answer":"This is more of a mathematical approach but it works 100% of the time:\nLet's say you want to use random.random() function to generate a number between a and b. To achieve this, just do the following:\nnum = (b-a)*random.random() + a;\nOf course, you can generate more numbers.","Q_Score":1721,"Tags":"python,random,integer","A_Id":53562948,"CreationDate":"2010-10-22T12:48:00.000","Title":"Generate random integers between 0 and 9","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to build an internal search engine (I have a very large collection of thousands of XML files) that is able to map queries to concepts. For example, if I search for \"big cats\", I would want highly ranked results to return documents with \"large cats\" as well. But I may also be interested in having it return \"huge animals\", albeit at a much lower relevancy score. \nI'm currently reading through the Natural Language Processing in Python book, and it seems WordNet has some word mappings that might prove useful, though I'm not sure how to integrate that into a search engine. Could I use Lucene to do this? How?\nFrom further research, it seems \"latent semantic analysis\" is relevant to what I'm looking for but I'm not sure how to implement it. 
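To make the Ellipsis answer above concrete, here is a small sketch (the array is made up) showing that ... stands in for as many colons as needed:

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)  # hypothetical 3D array

# '...' expands to the required number of ':', so these two are the same view:
a = x[..., 1]
b = x[:, :, 1]
print(np.array_equal(a, b))  # True
```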
\nAny advice on how to get this done?","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1997,"Q_Id":4003840,"Users Score":0,"Answer":"First , write a piece of python code that will return you pineapple , orange , papaya when you input apple. By focusing on \"is\" relation of semantic network. Then continue with has a relationship and so on.\nI think at the end , you might get a fairly sufficient piece of code for a school project.","Q_Score":5,"Tags":"python,search,lucene,nlp,lsa","A_Id":4004384,"CreationDate":"2010-10-23T11:58:00.000","Title":"How to build a conceptual search engine?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to build an internal search engine (I have a very large collection of thousands of XML files) that is able to map queries to concepts. For example, if I search for \"big cats\", I would want highly ranked results to return documents with \"large cats\" as well. But I may also be interested in having it return \"huge animals\", albeit at a much lower relevancy score. \nI'm currently reading through the Natural Language Processing in Python book, and it seems WordNet has some word mappings that might prove useful, though I'm not sure how to integrate that into a search engine. Could I use Lucene to do this? How?\nFrom further research, it seems \"latent semantic analysis\" is relevant to what I'm looking for but I'm not sure how to implement it. \nAny advice on how to get this done?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":1997,"Q_Id":4003840,"Users Score":1,"Answer":"This is an incredibly hard problem and it can't be solved in a way that would always produce adequate results. I'd suggest to stick to some very simple principles instead so that the results are at least predictable. I think you need 2 things: some basic morphology engine plus a dictionary of synonyms.\nWhenever a search query arrives, for each word you\n\nLook for a literal match\n\"Normalize\/canonicalze\" the word using the morphology engine, i.e. make it singular, first form, etc and look for matches\nLook for synonyms of the word\n\nThen repeat for all combinations of the input words, i.e. \"big cats\", \"big cat\", \"huge cats\" huge cat\" etc.\nIn fact, you need to store your index data in canonical form, too (singluar, first form etc) along with the literal form.\nAs for concepts, such as cats are also animals - this is where it gets tricky. It never really worked, because otherwise Google would have been returning conceptual matches already, but it's not doing that.","Q_Score":5,"Tags":"python,search,lucene,nlp,lsa","A_Id":4004024,"CreationDate":"2010-10-23T11:58:00.000","Title":"How to build a conceptual search engine?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to build an internal search engine (I have a very large collection of thousands of XML files) that is able to map queries to concepts. For example, if I search for \"big cats\", I would want highly ranked results to return documents with \"large cats\" as well. 
But I may also be interested in having it return \"huge animals\", albeit at a much lower relevancy score. \nI'm currently reading through the Natural Language Processing in Python book, and it seems WordNet has some word mappings that might prove useful, though I'm not sure how to integrate that into a search engine. Could I use Lucene to do this? How?\nFrom further research, it seems \"latent semantic analysis\" is relevant to what I'm looking for but I'm not sure how to implement it. \nAny advice on how to get this done?","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1997,"Q_Id":4003840,"Users Score":9,"Answer":"I'm not sure how to integrate that into a search engine. Could I use Lucene to do this? How?\n\nStep 1. Stop.\nStep 2. Get something to work.\nStep 3. By then, you'll understand more about Python and Lucene and other tools and ways you might integrate them.\nDon't start by trying to solve integration problems. Software can always be integrated. That's what an Operating System does. It integrates software. Sometimes you want \"tighter\" integration, but that's never the first problem to solve.\nThe first problem to solve is to get your search or concept thing or whatever it is to work as a dumb-old command-line application. Or pair of applications knit together by passing files around or knit together with OS pipes or something.\nLater, you can try and figure out how to make the user experience seamless.\nBut don't start with integration and don't stall because of integration questions. Set integration aside and get something to work.","Q_Score":5,"Tags":"python,search,lucene,nlp,lsa","A_Id":4004314,"CreationDate":"2010-10-23T11:58:00.000","Title":"How to build a conceptual search engine?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently came across Pytables and find it to be very cool. It is clear that they are superior to a csv format for very large data sets. I am running some simulations using python. The output is not so large, say 200 columns and 2000 rows. \nIf someone has experience with both, can you suggest which format would be more convenient in the long run for such data sets that are not very large. Pytables has data manipulation capabilities and browsing of the data with Vitables, but the browser does not have as much functionality as, say Excel, which can be used for CSV. Similarly, do you find one better than the other for importing and exporting data, if working mainly in python? Is one more convenient in terms of file organization? Any comments on issues such as these would be helpful.\nThanks.","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3418,"Q_Id":4022887,"Users Score":0,"Answer":"These are not \"exclusive\" choices.\nYou need both.\nCSV is just a data exchange format. If you use pytables, you still need to import and export in CSV format.","Q_Score":8,"Tags":"python,csv,pytables","A_Id":4023046,"CreationDate":"2010-10-26T10:42:00.000","Title":"Pytables vs. CSV for files that are not very large","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently came across Pytables and find it to be very cool. 
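For the PyTables question above, here is a minimal sketch of how simulation output of roughly that size might be written to an HDF5 file, with the run parameters attached as metadata (an approach one of the answers below also recommends). It assumes PyTables 3.x; the file name, array name, and attribute names are made up.

```python
import numpy as np
import tables  # PyTables 3.x assumed

results = np.random.rand(2000, 200)  # hypothetical simulation output

with tables.open_file("runs.h5", mode="w") as h5:
    arr = h5.create_array("/", "run_001", results, title="example run")
    # Attach the parameters that produced this run as metadata
    arr.attrs.timestep = 0.01
    arr.attrs.seed = 42
```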
It is clear that they are superior to a csv format for very large data sets. I am running some simulations using python. The output is not so large, say 200 columns and 2000 rows. \nIf someone has experience with both, can you suggest which format would be more convenient in the long run for such data sets that are not very large. Pytables has data manipulation capabilities and browsing of the data with Vitables, but the browser does not have as much functionality as, say Excel, which can be used for CSV. Similarly, do you find one better than the other for importing and exporting data, if working mainly in python? Is one more convenient in terms of file organization? Any comments on issues such as these would be helpful.\nThanks.","AnswerCount":6,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":3418,"Q_Id":4022887,"Users Score":2,"Answer":"One big plus for PyTables is the storage of metadata, like variables etc.\nIf you run the simulations more often with different parameters, you can store the results as an array entry in the h5 file.\nWe use it to store measurement data + experiment scripts to get the data so it is all self contained.\nBTW: If you need to look quickly into a hdf5 file you can use HDFView. It's a free Java app from the HDFGroup. It's easy to install.","Q_Score":8,"Tags":"python,csv,pytables","A_Id":7753331,"CreationDate":"2010-10-26T10:42:00.000","Title":"Pytables vs. CSV for files that are not very large","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently came across Pytables and find it to be very cool. It is clear that they are superior to a csv format for very large data sets. I am running some simulations using python. The output is not so large, say 200 columns and 2000 rows. \nIf someone has experience with both, can you suggest which format would be more convenient in the long run for such data sets that are not very large. Pytables has data manipulation capabilities and browsing of the data with Vitables, but the browser does not have as much functionality as, say Excel, which can be used for CSV. Similarly, do you find one better than the other for importing and exporting data, if working mainly in python? Is one more convenient in terms of file organization? Any comments on issues such as these would be helpful.\nThanks.","AnswerCount":6,"Available Count":3,"Score":0.0333209931,"is_accepted":false,"ViewCount":3418,"Q_Id":4022887,"Users Score":1,"Answer":"I think it's very hard to compare PyTables and CSV. PyTables is a data structure, while CSV is an exchange format for data.","Q_Score":8,"Tags":"python,csv,pytables","A_Id":4024016,"CreationDate":"2010-10-26T10:42:00.000","Title":"Pytables vs. CSV for files that are not very large","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What would be a smart data structure to use to represent a Sudoku puzzle? I.e. 
a 9X9 square where each \"cell\" contains either a number or a blank.\nSpecial considerations include:\n\nAbility to compare across row, column, and in 3X3 \"group\nEase of implementation (specifically in Python)\nEfficiency (not paramount)\n\nI suppose in a pinch, a 2D array might work but that seems to be a less than elegant solution. I just would like to know if there's a better data structure.","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":7767,"Q_Id":4066075,"Users Score":2,"Answer":"Others have reasonably suggested simply using a 2D array.\nI note that a 2D array in most language implementations (anything in which that is implemented as \"array of array of X\" suffers from additional access time overhead (one access to the top level array, a second to the subarray).\nI suggest you implement the data structure abstractly as a 2D array (perhaps even continuing to use 2 indexes), but implement the array as single block of 81 cells, indexed classically by i*9+j. This gives you conceptual clarity, and somewhat more efficient implementation, by avoiding that second memory access.\nYou should be able to hide the 1D array access behind setters and getters that take 2D indexes. If your language has the capability (dunno if this is true for Python), such small methods can be inlined for additional speed.","Q_Score":6,"Tags":"python,data-structures,graph,sudoku","A_Id":4066155,"CreationDate":"2010-11-01T01:16:00.000","Title":"Proper data structure to represent a Sudoku puzzle?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given some data of shape 20x45, where each row is a separate data set, say 20 different sine curves with 45 data points each, how would I go about getting the same data, but with shape 20x100?\nIn other words, I have some data A of shape 20x45, and some data B of length 20x100, and I would like to have A be of shape 20x100 so I can compare them better.\nThis is for Python and Numpy\/Scipy.\nI assume it can be done with splines, so I am looking for a simple example, maybe just 2x10 to 2x20 or something, where each row is just a line, to demonstrate the solution.\nThanks!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5913,"Q_Id":4072844,"Users Score":0,"Answer":"If your application is not sensitive to precision or you just want a quick overview, you could just fill the unknown data points with averages from neighbouring known data points (in other words, do naive linear interpolation).","Q_Score":7,"Tags":"python,numpy,scipy","A_Id":4072921,"CreationDate":"2010-11-01T20:37:00.000","Title":"Add more sample points to data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid\/ask\/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). 
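A short sketch of the flat-array idea from the Sudoku answer above: one block of 81 cells indexed by row*9+col, hidden behind small accessors. The class and method names are made up for illustration.

```python
class SudokuGrid(object):
    """9x9 grid stored as a single flat list of 81 cells (None = blank)."""

    def __init__(self):
        self.cells = [None] * 81

    def get(self, row, col):
        return self.cells[row * 9 + col]

    def set(self, row, col, value):
        self.cells[row * 9 + col] = value

    def row(self, r):
        return self.cells[r * 9:(r + 1) * 9]

    def column(self, c):
        return self.cells[c::9]

    def block(self, r, c):
        """The 3x3 block containing cell (r, c)."""
        r0, c0 = 3 * (r // 3), 3 * (c // 3)
        return [self.get(r0 + i, c0 + j) for i in range(3) for j in range(3)]
```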
I would just like a solution that a computer scientist would approve of. (Or not laugh at?)\n(1) Optimize for disk space, I\/O speed, or memory?\nFor simulation, the overall speed is important. We want the I\/O (really, I) speed of the data just faster than the computational engine, so we are not I\/O limited. \n(2) Store text, or something else (binary numeric)?\n(3) Given a set of choices from (1)-(2), are there any standout language\/library combinations to do the job-- Java, Python, C++, or something else?\nI would classify this code as \"write and forget\", so more points for efficiency over clarity\/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions.\nEdit: this is for a Linux system (Ubuntu)\nThanks","AnswerCount":6,"Available Count":4,"Score":0.0333209931,"is_accepted":false,"ViewCount":2032,"Q_Id":4098509,"Users Score":1,"Answer":"Using D-Bus format to send the information may be to your advantage. The format is standard, binary, and D-Bus is implemented in multiple languages, and can be used to send both over the network and inter-process on the same machine.","Q_Score":2,"Tags":"java,c++,python,storage,simulation","A_Id":4098941,"CreationDate":"2010-11-04T15:51:00.000","Title":"Collecting, storing, and retrieving large amounts of numeric data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid\/ask\/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?)\n(1) Optimize for disk space, I\/O speed, or memory?\nFor simulation, the overall speed is important. We want the I\/O (really, I) speed of the data just faster than the computational engine, so we are not I\/O limited. \n(2) Store text, or something else (binary numeric)?\n(3) Given a set of choices from (1)-(2), are there any standout language\/library combinations to do the job-- Java, Python, C++, or something else?\nI would classify this code as \"write and forget\", so more points for efficiency over clarity\/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions.\nEdit: this is for a Linux system (Ubuntu)\nThanks","AnswerCount":6,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":2032,"Q_Id":4098509,"Users Score":0,"Answer":"If you are just storing, then use system tools. Don't write your own. 
If you need to do some real-time processing of the data before it is stored, then that's something completely different.","Q_Score":2,"Tags":"java,c++,python,storage,simulation","A_Id":4098550,"CreationDate":"2010-11-04T15:51:00.000","Title":"Collecting, storing, and retrieving large amounts of numeric data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid\/ask\/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?)\n(1) Optimize for disk space, I\/O speed, or memory?\nFor simulation, the overall speed is important. We want the I\/O (really, I) speed of the data just faster than the computational engine, so we are not I\/O limited. \n(2) Store text, or something else (binary numeric)?\n(3) Given a set of choices from (1)-(2), are there any standout language\/library combinations to do the job-- Java, Python, C++, or something else?\nI would classify this code as \"write and forget\", so more points for efficiency over clarity\/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions.\nEdit: this is for a Linux system (Ubuntu)\nThanks","AnswerCount":6,"Available Count":4,"Score":0.0333209931,"is_accepted":false,"ViewCount":2032,"Q_Id":4098509,"Users Score":1,"Answer":"Actually, this is quite similar to what I'm doing, which is monitoring changes players make to the world in a game. I'm currently using an sqlite database with python.\nAt the start of the program, I load the disk database into memory for fast writes. Each change is put into two lists. These lists are for both the memory database and the disk database. Every x or so updates, the memory database is updated and a counter is pushed up one. This is repeated, and when the counter equals 5, it's reset and the list with changes for the disk is flushed to the disk database and the list is cleared. I have found this works well if I also set the journal mode to WAL (Write-Ahead Logging). This method can handle about 100-300 updates a second if I update memory every 100 updates and the disk counter is set to update every 5 memory updates. You should probably choose binary since, unless you have faults in your data sources, that would be the most logical choice.","Q_Score":2,"Tags":"java,c++,python,storage,simulation","A_Id":4098613,"CreationDate":"2010-11-04T15:51:00.000","Title":"Collecting, storing, and retrieving large amounts of numeric data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid\/ask\/last or 'tape' for various stocks and futures). 
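A rough sketch of the buffering approach described in the answer above, using Python's sqlite3 module with WAL enabled and batched commits. The table layout, column names, and batch size are invented for illustration, not part of the original answer.

```python
import sqlite3

conn = sqlite3.connect("ticks.db")
conn.execute("PRAGMA journal_mode=WAL")  # write-ahead logging
conn.execute("CREATE TABLE IF NOT EXISTS ticks (ts REAL, price REAL)")

pending = []

def record(ts, price, batch_size=100):
    """Buffer rows in memory and flush them to disk in batches."""
    pending.append((ts, price))
    if len(pending) >= batch_size:
        conn.executemany("INSERT INTO ticks VALUES (?, ?)", pending)
        conn.commit()
        del pending[:]
```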
The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?)\n(1) Optimize for disk space, I\/O speed, or memory?\nFor simulation, the overall speed is important. We want the I\/O (really, I) speed of the data just faster than the computational engine, so we are not I\/O limited. \n(2) Store text, or something else (binary numeric)?\n(3) Given a set of choices from (1)-(2), are there any standout language\/library combinations to do the job-- Java, Python, C++, or something else?\nI would classify this code as \"write and forget\", so more points for efficiency over clarity\/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions.\nEdit: this is for a Linux system (Ubuntu)\nThanks","AnswerCount":6,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":2032,"Q_Id":4098509,"Users Score":3,"Answer":"Optimizing for disk space and IO speed is the same thing - these days, CPUs are so fast compared to IO that it's often overall faster to compress data before storing it (you may actually want to do that). I don't really see memory playing a big role (though you should probably use a reasonably-sized buffer to ensure you're doing sequential writes).\nBinary is more compact (and thus faster). Given the amount of data, I doubt whether being human-readable has any value. The only advantage of a text format would be that it's easier to figure out and correct if it gets corrupted or you lose the parsing code.","Q_Score":2,"Tags":"java,c++,python,storage,simulation","A_Id":4098582,"CreationDate":"2010-11-04T15:51:00.000","Title":"Collecting, storing, and retrieving large amounts of numeric data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing and distributed image processing application using hadoop streaming, python, matlab, and elastic map reduce. I have compiled a binary executable of my matlab code using the matlab compiler. I am wondering how I can incorporate this into my workflow so the binary is part of the processing on Amazon's elastic map reduce?\nIt looks like I have to use the Hadoop Distributed Cache?\nThe code is very complicated (and not written by me) so porting it to another language is not possible right now.\nTHanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1138,"Q_Id":4101815,"Users Score":0,"Answer":"The following is not exactly an answer to your Hadoop question, but I couldn't resist not asking why you don't execute your processing jobs on the Grid resources? There are proven solutions for executing compute intensive workflows on the Grid. And as far as I know matlab runtime environment is usually available on these resources. 
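To illustrate the "store binary, write sequentially" advice in the answer above, here is a minimal sketch using a NumPy structured dtype for tick records. The field layout and file name are assumptions made for the example.

```python
import numpy as np

# Hypothetical fixed-width record: timestamp, bid, ask, last
tick_dtype = np.dtype([("ts", "f8"), ("bid", "f8"), ("ask", "f8"), ("last", "f8")])

ticks = np.zeros(3, dtype=tick_dtype)
ticks["ts"] = [1.0, 2.0, 3.0]
ticks["bid"] = [10.10, 10.20, 10.15]

# Append the raw binary records to a file as they arrive...
with open("ticks.bin", "ab") as f:
    f.write(ticks.tobytes())

# ...and read them back later for the simulation in one sequential pass.
loaded = np.fromfile("ticks.bin", dtype=tick_dtype)
```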
You may also consider using the Grid especially if you are in academia.\nGood luck","Q_Score":1,"Tags":"python,matlab,amazon-web-services,hadoop,mapreduce","A_Id":4101917,"CreationDate":"2010-11-04T22:01:00.000","Title":"Hadoop\/Elastic Map Reduce with binary executable?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a csv file which contains rows from a sqlite3 database. I wrote the rows to the csv file using python.\nWhen I open the csv file with Ms Excel, a blank row appears below every row, but the file on notepad is fine(without any blanks).\nDoes anyone know why this is happenning and how I can fix it?\nEdit: I used the strip() function for all the attributes before writing a row.\nThanks.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":7209,"Q_Id":4122794,"Users Score":34,"Answer":"You're using open('file.csv', 'w')--try open('file.csv', 'wb'). \nThe Python csv module requires output files be opened in binary mode.","Q_Score":15,"Tags":"python,excel,csv","A_Id":4122980,"CreationDate":"2010-11-08T10:00:00.000","Title":"Csv blank rows problem with Excel","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a csv file which contains rows from a sqlite3 database. I wrote the rows to the csv file using python.\nWhen I open the csv file with Ms Excel, a blank row appears below every row, but the file on notepad is fine(without any blanks).\nDoes anyone know why this is happenning and how I can fix it?\nEdit: I used the strip() function for all the attributes before writing a row.\nThanks.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":7209,"Q_Id":4122794,"Users Score":0,"Answer":"the first that comes into my mind (just an idea) is that you might have used \"\\r\\n\" as row delimiter (which is shown as one linebrak in notepad) but excel expects to get only \"\\n\" or only \"\\r\" and so it interprets this as two line-breaks.","Q_Score":15,"Tags":"python,excel,csv","A_Id":4122816,"CreationDate":"2010-11-08T10:00:00.000","Title":"Csv blank rows problem with Excel","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting this error:\n\n\/sw\/lib\/python2.7\/site-packages\/matplotlib\/backends\/backend_macosx.py:235:\n UserWarning: Python is not installed as a framework. The MacOSX\n backend may not work correctly if Python is not installed as a\n framework. 
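Returning to the CSV/Excel answers above, a small sketch of the suggested fix: open the file in binary mode on Python 2 (or with newline='' on Python 3) so the csv module controls the line endings itself. The sample rows are made up.

```python
import csv

rows = [("id", "name"), (1, "alpha"), (2, "beta")]  # hypothetical data

# Python 2: binary mode avoids the doubled line endings that make Excel show blank rows.
with open("out.csv", "wb") as f:
    csv.writer(f).writerows(rows)

# Python 3 equivalent: text mode with newline="" instead of "wb".
# with open("out.csv", "w", newline="") as f:
#     csv.writer(f).writerows(rows)
```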
Please see the Python documentation for more information on\n installing Python as a framework on Mac OS X\n\nI installed python27 using fink and it's using the default matplotlib is using macosx framework.","AnswerCount":11,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":25060,"Q_Id":4130355,"Users Score":0,"Answer":"Simply aliasing a new command to launch python in ~\/.bash_profile will do the trick.\nalias vpython3=\/Library\/Frameworks\/Python.framework\/Versions\/3.6(replace with your own python version)\/bin\/python3\nthen 'source ~\/.bash_profile' and use vpython3 to launch python3.\nExplanation: Python is actually by default installed as framework on mac, but using virtualenv would link your python3 command under the created virtual environment, instead of the above framework directory ('which python3' in terminal and you'll see that). Perhaps Matplotlib has to find the bin\/ include\/ lib\/,etc in the python framework.","Q_Score":62,"Tags":"python,macos,matplotlib,fink","A_Id":53662588,"CreationDate":"2010-11-09T03:49:00.000","Title":"python matplotlib framework under macosx?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting this error:\n\n\/sw\/lib\/python2.7\/site-packages\/matplotlib\/backends\/backend_macosx.py:235:\n UserWarning: Python is not installed as a framework. The MacOSX\n backend may not work correctly if Python is not installed as a\n framework. Please see the Python documentation for more information on\n installing Python as a framework on Mac OS X\n\nI installed python27 using fink and it's using the default matplotlib is using macosx framework.","AnswerCount":11,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":25060,"Q_Id":4130355,"Users Score":18,"Answer":"There are two ways Python can be built and installed on Mac OS X. One is as a traditional flat Unix-y shared library. The other is known as a framework install, a file layout similar to other frameworks on OS X where all of the component directories (include, lib, bin) for the product are installed as subdirectories under the main framework directory. The Fink project installs Pythons using the Unix shared library method. Most other distributors, including the Apple-supplied Pythons in OS X, the python.org installers, and the MacPorts project, install framework versions of Python. One of the advantages of a framework installation is that it will work properly with various OS X API calls that require a window manager connection (generally GUI-related interfaces) because the Python interpreter is packaged as an app bundle within the framework.\nIf you do need the functions in matplotlib that require the GUI functions, the simplest approach may be to switch to MacPorts which also packages matplotlib (port py27-matplotlib) and its dependencies. If so, be careful not to mix packages between Fink and MacPorts. It's best to stick with one or the other unless you are really careful. 
Adjust your shell path accordingly; it would be safest to remove all of the Fink packages and install MacPorts versions.","Q_Score":62,"Tags":"python,macos,matplotlib,fink","A_Id":4131726,"CreationDate":"2010-11-09T03:49:00.000","Title":"python matplotlib framework under macosx?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting this error:\n\n\/sw\/lib\/python2.7\/site-packages\/matplotlib\/backends\/backend_macosx.py:235:\n UserWarning: Python is not installed as a framework. The MacOSX\n backend may not work correctly if Python is not installed as a\n framework. Please see the Python documentation for more information on\n installing Python as a framework on Mac OS X\n\nI installed python27 using fink and it's using the default matplotlib is using macosx framework.","AnswerCount":11,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":25060,"Q_Id":4130355,"Users Score":31,"Answer":"Optionally you could use the Agg backend which requires no extra installation of anything. Just put backend : Agg into ~\/.matplotlib\/matplotlibrc","Q_Score":62,"Tags":"python,macos,matplotlib,fink","A_Id":33873802,"CreationDate":"2010-11-09T03:49:00.000","Title":"python matplotlib framework under macosx?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"when I used NumPy I stored it's data in the native format *.npy. It's very fast and gave me some benefits, like this one\n\nI could read *.npy from C code as\nsimple binary data(I mean *.npy are\nbinary-compatibly with C structures)\n\nNow I'm dealing with HDF5 (PyTables at this moment). As I read in the tutorial, they are using NumPy serializer to store NumPy data, so I can read these data from C as from simple *.npy files?\nDoes HDF5's numpy are binary-compatibly with C structures too?\nUPD :\nI have matlab client reading from hdf5, but don't want to read hdf5 from C++ because reading binary data from *.npy is times faster, so I really have a need in reading hdf5 from C++ (binary-compatibility)\nSo I'm already using two ways for transferring data - *.npy (read from C++ as bytes,from Python natively) and hdf5 (access from Matlab)\nAnd if it's possible,want to use the only one way - hdf5, but to do this I have to find a way to make hdf5 binary-compatibly with C++ structures, pls help, If there is some way to turn-off compression in hdf5 or something else to make hdf5 binary-compatibly with C++ structures - tell me where i can read about it...","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6260,"Q_Id":4133327,"Users Score":0,"Answer":"HDF5 takes care of binary compatibility of structures for you. You simply have to tell it what your structs consist of (dtype) and you'll have no problems saving\/reading record arrays - this is because the type system is basically 1:1 between numpy and HDF5. If you use H5py I'm confident to say the IO should be fast enough provided you use all native types and large batched reads\/writes - the entire dataset of allowable. 
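A tiny sketch of the Agg-backend answer above: selecting the backend in code before pyplot is imported, as an alternative to editing matplotlibrc. The plot itself is just filler.

```python
import matplotlib
matplotlib.use("Agg")           # must happen before pyplot is imported
import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [0, 1, 4])
plt.savefig("plot.png")         # renders to a file, so no GUI or framework build is needed
```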
After that it depends on chunking and what filters (shuffle, compression for example) - it's also worth noting sometimes those can speed up by greatly reducing file size so always look at benchmarks. Note that the the type and filter choices are made on the end creating the HDF5 document.\nIf you're trying to parse HDF5 yourself, you're doing it wrong. Use the C++ and C apis if you're working in C++\/C. There are examples of so called \"compound types\" on the HDF5 groups website.","Q_Score":2,"Tags":"python,c,numpy,hdf5,pytables","A_Id":30009455,"CreationDate":"2010-11-09T11:51:00.000","Title":"HDF5 : storing NumPy data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any package in python nltk that can produce all different parts of speech words for a given word. For example if i give add(verb) then it must produce addition(noun),additive(adj) and so on. Can anyone let me know?","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":430,"Q_Id":4150443,"Users Score":2,"Answer":"There are two options i can think of off the top of my head:\nOption one is to iterate over the sample POS-tagged corpora and simply build this mapping yourself. This gives you the POS tags that are associated with a particular word in the corpora.\nOption two is to build a hidden markov model POS tagger on the corpora, then inspect the values of the model. This gives you the POS tags that are associated with a particular word in the corpora plus their a priori probabilities, as well as some other statistical data. \nDepending on what your use-case is, one may be better than the other. I would start with option one, since it's fast and easy.","Q_Score":0,"Tags":"python,nltk","A_Id":4156445,"CreationDate":"2010-11-11T00:32:00.000","Title":"Extract different POS words for a given word in python nltk","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any package in python nltk that can produce all different parts of speech words for a given word. For example if i give add(verb) then it must produce addition(noun),additive(adj) and so on. Can anyone let me know?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":430,"Q_Id":4150443,"Users Score":0,"Answer":"NLTK has a lot of clever things hiding away, so there might be a direct way of doing it. However, I think you may have to write your own code to work with the WordNet database.","Q_Score":0,"Tags":"python,nltk","A_Id":4155066,"CreationDate":"2010-11-11T00:32:00.000","Title":"Extract different POS words for a given word in python nltk","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working with model output at the moment, and I can't seem to come up with a nice way of combining two arrays of data. Arrays A and B store different data, and the entries in each correspond to some spatial (x,y) point -- A holds some parameter, and B holds model output. 
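For "option one" in the NLTK answer above, here is a rough sketch that builds a word-to-POS-tag mapping by iterating over a tagged corpus. It assumes the Brown corpus has been downloaded (nltk.download('brown')), and the lookup words are just examples.

```python
from collections import defaultdict
from nltk.corpus import brown  # any POS-tagged corpus shipped with NLTK will do

word_tags = defaultdict(set)
for word, tag in brown.tagged_words():
    word_tags[word.lower()].add(tag)

print(word_tags["add"])       # tags observed for "add" in the corpus
print(word_tags["addition"])  # tags observed for "addition"
```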
The problem is that B is a spatial subsection of A -- that is, if the model were for the entire world, A would store the parameter at each point on the earth, and B would store the model output only for those points in Africa.\nSo I need to find how much B is offset from A -- put another way, I need to find the indexes at which they start to overlap. So if A.shape=(1000,1500), is B the (750:850, 200:300) part of that, or the (783:835, 427:440) subsection? I have arrays associated with both A and B which store the (x,y) positions of the gridpoints for each.\nThis would seem to be a simple problem -- find where the two arrays overlap. And I can solve it with scipy.spatial's KDTree simply enough, but it's very slow. Anyone have any better ideas?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1367,"Q_Id":4150909,"Users Score":0,"Answer":"Can you say more? What model are you using? What are you modelling? How is it computed?\nCan you make the dimensions match to avoid the fit? (i.e. if B doesn't depend on all of A, only plug in the part of A that B models, or compute boring values for the parts of B that wouldn't overlap A and drop those values later)","Q_Score":6,"Tags":"python,multidimensional-array,numpy,subdomain,overlap","A_Id":4151409,"CreationDate":"2010-11-11T02:26:00.000","Title":"Where do two 2-D arrays begin to overlap each other?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working with model output at the moment, and I can't seem to come up with a nice way of combining two arrays of data. Arrays A and B store different data, and the entries in each correspond to some spatial (x,y) point -- A holds some parameter, and B holds model output. The problem is that B is a spatial subsection of A -- that is, if the model were for the entire world, A would store the parameter at each point on the earth, and B would store the model output only for those points in Africa.\nSo I need to find how much B is offset from A -- put another way, I need to find the indexes at which they start to overlap. So if A.shape=(1000,1500), is B the (750:850, 200:300) part of that, or the (783:835, 427:440) subsection? I have arrays associated with both A and B which store the (x,y) positions of the gridpoints for each.\nThis would seem to be a simple problem -- find where the two arrays overlap. And I can solve it with scipy.spatial's KDTree simply enough, but it's very slow. Anyone have any better ideas?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1367,"Q_Id":4150909,"Users Score":0,"Answer":"I need to find the indexes at which they start to overlap\n\nSo are you looking for indexes from A or from B? And is B strictly rectangular?\nFinding the bounding box or convex hull of B is really cheap.","Q_Score":6,"Tags":"python,multidimensional-array,numpy,subdomain,overlap","A_Id":4191918,"CreationDate":"2010-11-11T02:26:00.000","Title":"Where do two 2-D arrays begin to overlap each other?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use python heavily for manipulating data and then packaging it for statistical modeling (R through RPy2). 
\nFeeling a little restless, I would like to branch out into other languages where \n\nFaster than python\nIt's free\nThere's good books, documentations and tutorials\nVery suitable for data manipulation\nLots of libraries for statistical modeling\n\nAny recommendations?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":210,"Q_Id":4155955,"Users Score":1,"Answer":"If you just want to learn a new language you could take a look at scala. The language is influenced by languages like ruby, python and erlang, but is staticaly typed and runs on the JVM. The speed is comparable to Java. And you can use all the java libraries, plus reuse a lot of your python code through jython.","Q_Score":1,"Tags":"python,programming-languages","A_Id":4156169,"CreationDate":"2010-11-11T15:19:00.000","Title":"moving on from python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use python heavily for manipulating data and then packaging it for statistical modeling (R through RPy2). \nFeeling a little restless, I would like to branch out into other languages where \n\nFaster than python\nIt's free\nThere's good books, documentations and tutorials\nVery suitable for data manipulation\nLots of libraries for statistical modeling\n\nAny recommendations?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":210,"Q_Id":4155955,"Users Score":1,"Answer":"I didn't see you mention SciPy on your list... I tend like R syntax better, but they cover much of the same ground. SciPy has faster matrix and array structures than the general purpose Python ones. Mostly places where I have wanted to use Cython, SciPy has been just as easy \/ fast. \nGNU\/Octave is an open\/free version of Matlab which might also interest you.","Q_Score":1,"Tags":"python,programming-languages","A_Id":4157423,"CreationDate":"2010-11-11T15:19:00.000","Title":"moving on from python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to get more than 9 subplots in matplotlib?\nI am on the subplots command pylab.subplot(449); how can I get a 4410 to work?\nThank you very much.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":28836,"Q_Id":4158367,"Users Score":72,"Answer":"It was easier than I expected, I just did: pylab.subplot(4,4,10) and it worked.","Q_Score":51,"Tags":"python,charts,matplotlib","A_Id":4158455,"CreationDate":"2010-11-11T19:19:00.000","Title":"more than 9 subplots in matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am in dire need of a classification task example using LibSVM in python. I don't know how the Input should look like and which function is responsible for training and which one for testing\nThanks","AnswerCount":8,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":50030,"Q_Id":4214868,"Users Score":13,"Answer":"LIBSVM reads the data from a tuple containing two lists. The first list contains the classes and the second list contains the input data. 
create simple dataset with two possible classes\nyou also need to specify which kernel you want to use by creating svm_parameter.\n\n\n>> from libsvm import *\n>> prob = svm_problem([1,-1],[[1,0,1],[-1,0,-1]])\n>> param = svm_parameter(kernel_type = LINEAR, C = 10)\n ## training the model\n>> m = svm_model(prob, param)\n#testing the model\n>> m.predict([1, 1, 1])","Q_Score":25,"Tags":"python,machine-learning,svm,libsvm","A_Id":4215056,"CreationDate":"2010-11-18T12:44:00.000","Title":"An example using python bindings for SVM library, LIBSVM","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am in dire need of a classification task example using LibSVM in python. I don't know how the Input should look like and which function is responsible for training and which one for testing\nThanks","AnswerCount":8,"Available Count":2,"Score":0.0748596907,"is_accepted":false,"ViewCount":50030,"Q_Id":4214868,"Users Score":3,"Answer":"Adding to @shinNoNoir :\nparam.kernel_type represents the type of kernel function you want to use,\n0: Linear\n1: polynomial\n2: RBF\n3: Sigmoid\nAlso have in mind that, svm_problem(y,x) : here y is the class labels and x is the class instances and x and y can only be lists,tuples and dictionaries.(no numpy array)","Q_Score":25,"Tags":"python,machine-learning,svm,libsvm","A_Id":8302624,"CreationDate":"2010-11-18T12:44:00.000","Title":"An example using python bindings for SVM library, LIBSVM","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does anyone have any experience using r\/python with data stored in Solid State Drives. If you are doing mostly reads, in theory this should significantly improve the load times of large datasets. I want to find out if this is true and if it is worth investing in SSDs for improving the IO rates in data intensive applications.","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4485,"Q_Id":4262984,"Users Score":0,"Answer":"The read and write times for SSDs are significantly higher than standard 7200 RPM disks (it's still worth it with a 10k RPM disk, not sure how much of an improvement it is over a 15k). So, yes, you'd get much faster times on data access.\nThe performance improvement is undeniable. Then, it's a question of economics. 2TB 7200 RPM disks are $170 a piece, and 100GB SSDS cost $210. So if you have a lot of data, you may run into a problem.\nIf you read\/write a lot of data, get an SSD. If the application is CPU intensive, however, you'd benefit much more from getting a better processor.","Q_Score":12,"Tags":"python,r,data-analysis,solid-state-drive","A_Id":4263022,"CreationDate":"2010-11-24T02:31:00.000","Title":"Data analysis using R\/python and SSDs","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does anyone have any experience using r\/python with data stored in Solid State Drives. If you are doing mostly reads, in theory this should significantly improve the load times of large datasets. 
I want to find out if this is true and if it is worth investing in SSDs for improving the IO rates in data intensive applications.","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":4485,"Q_Id":4262984,"Users Score":2,"Answer":"I have to second John's suggestion to profile your application. My experience is that it isn't the actual data reads that are the slow part, it's the overhead of creating the programming objects to contain the data, casting from strings, memory allocation, etc. \nI would strongly suggest you profile your code first, and consider using alternative libraries (like numpy) to see what improvements you can get before you invest in hardware.","Q_Score":12,"Tags":"python,r,data-analysis,solid-state-drive","A_Id":4264161,"CreationDate":"2010-11-24T02:31:00.000","Title":"Data analysis using R\/python and SSDs","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am planning to use SlopeOne algorithm to predict if a gamer can complete a given level in a Game or not?\nHere is the scenario:\n\nLots of Gamers play and try to complete 100 levels in the game.\nEach gamer can play a level as many times as they want until they cross the level.\nThe system keeps track of the level and the number of ReTries for each level.\nEach Game Level falls into one of the 3 categories (Easy, Medium, Hard)\nApproximate distribution of the levels is 33% across each category meaning 33% of the levels are Easy, 33% of the levels are Hard etc.\n\nUsing this information:\nWhen a new gamer starts playing the game, after a few levels, I want to be able to predict \nwhich level can the Gamer Cross easily and which levels can he\/she not cross easily.\nwith this predictive ability I would like to present the game levels that the user would be able to cross with 50% probability.\nCan I use SlopeOne algorithm for this?\nReasoning is I see a lot of similarities between what I want to with say a movie rating system.\nn users, m items and N ratings to predict user rating for a given item.\nSimilarly, in my case, I have\nn users, m levels and N Retries ...\nThe only difference being in a movie rating system the rating is fixed on a 1-5 scale and in my case the retries can range from 1-x (x could be as high as 30)\nwhile theoretically someone could retry more 30 times, for now I could start with fixing the upper limit at 30 and adjust after I have more data.\nThanks.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":431,"Q_Id":4273169,"Users Score":2,"Answer":"I think it might work, but I would apply log to the number of tries (you can't do log(0) so retries won't work) first. If someone found a level easy they would try it once or twice, whereas people who found it hard would generally have to do it over and over again. The difference between did it in 1 go vs 2 goes is much greater than 20 goes vs 21 goes. 
This would remove the need to place an arbitrary limit on the number of goes value.","Q_Score":8,"Tags":"python,algorithm,filtering,prediction,collaborative","A_Id":5586430,"CreationDate":"2010-11-25T02:03:00.000","Title":"Using SlopeOne algorithm to predict if a gamer can complete a level in a Game?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need a reversible hash function (obviously the input will be much smaller in size than the output) that maps the input to the output in a random-looking way. Basically, I want a way to transform a number like \"123\" to a larger number like \"9874362483910978\", but not in a way that will preserve comparisons, so it must not be always true that, if x1 > x2, f(x1) > f(x2) (but neither must it be always false).\nThe use case for this is that I need to find a way to transform small numbers into larger, random-looking ones. They don't actually need to be random (in fact, they need to be deterministic, so the same input always maps to the same output), but they do need to look random (at least when base64encoded into strings, so shifting by Z bits won't work as similar numbers will have similar MSBs).\nAlso, easy (fast) calculation and reversal is a plus, but not required.\nI don't know if I'm being clear, or if such an algorithm exists, but I'd appreciate any and all help!","AnswerCount":5,"Available Count":2,"Score":0.1586485043,"is_accepted":false,"ViewCount":47252,"Q_Id":4273466,"Users Score":4,"Answer":"Why not just XOR with a nice long number?\nEasy. Fast. Reversible.\nOr, if this doesn't need to be terribly secure, you could convert from base 10 to some smaller base (like base 8 or base 4, depending on how long you want the numbers to be).","Q_Score":50,"Tags":"python,hash","A_Id":4273543,"CreationDate":"2010-11-25T03:18:00.000","Title":"Reversible hash function?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need a reversible hash function (obviously the input will be much smaller in size than the output) that maps the input to the output in a random-looking way. Basically, I want a way to transform a number like \"123\" to a larger number like \"9874362483910978\", but not in a way that will preserve comparisons, so it must not be always true that, if x1 > x2, f(x1) > f(x2) (but neither must it be always false).\nThe use case for this is that I need to find a way to transform small numbers into larger, random-looking ones. They don't actually need to be random (in fact, they need to be deterministic, so the same input always maps to the same output), but they do need to look random (at least when base64encoded into strings, so shifting by Z bits won't work as similar numbers will have similar MSBs).\nAlso, easy (fast) calculation and reversal is a plus, but not required.\nI don't know if I'm being clear, or if such an algorithm exists, but I'd appreciate any and all help!","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":47252,"Q_Id":4273466,"Users Score":19,"Answer":"What you are asking for is encryption. A block cipher in its basic mode of operation, ECB, reversibly maps a input block onto an output block of the same size. 
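A bare-bones sketch of the XOR suggestion above. The key is an arbitrary constant chosen only for illustration, and this merely obscures the value rather than securing it.

```python
KEY = 0x9E3779B97F4A7C15  # arbitrary fixed key, chosen here just for illustration

def encode(n):
    return n ^ KEY

def decode(m):
    return m ^ KEY

print(encode(123))           # deterministic but random-looking for small inputs
print(decode(encode(123)))   # 123 again
```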
The input and output blocks can be interpreted as numbers.\nFor example, AES is a 128 bit block cipher, so it maps an input 128 bit number onto an output 128 bit number. If 128 bits is good enough for your purposes, then you can simply pad your input number out to 128 bits, transform that single block with AES, then format the output as a 128 bit number.\nIf 128 bits is too large, you could use a 64 bit block cipher, like 3DES, IDEA or Blowfish.\nECB mode is considered weak, but its weakness is the constraint that you have postulated as a requirement (namely, that the mapping be \"deterministic\"). This is a weakness, because once an attacker has observed that 123 maps to 9874362483910978, from then on whenever she sees the latter number, she knows the plaintext was 123. An attacker can perform frequency analysis and\/or build up a dictionary of known plaintext\/ciphertext pairs.","Q_Score":50,"Tags":"python,hash","A_Id":4274259,"CreationDate":"2010-11-25T03:18:00.000","Title":"Reversible hash function?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been a long time user of R and have recently started working with Python. Using conventional RDBMS systems for data warehousing, and R\/Python for number-crunching, I feel the need now to get my hands dirty with Big Data Analysis.\nI'd like to know how to get started with Big Data crunching.\n- How to start simple with Map\/Reduce and the use of Hadoop\n\nHow can I leverage my skills in R and Python to get started with Big Data analysis. Using the Python Disco project for example.\nUsing the RHIPE package and finding toy datasets and problem areas.\nFinding the right information to allow me to decide if I need to move to NoSQL from RDBMS type databases\n\nAll in all, I'd like to know how to start small and gradually build up my skills and know-how in Big Data Analysis.\nThank you for your suggestions and recommendations.\nI apologize for the generic nature of this query, but I'm looking to gain more perspective regarding this topic.\n\nHarsh","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":18227,"Q_Id":4322559,"Users Score":29,"Answer":"Using the Python Disco project for example.\n\nGood. Play with that.\n\nUsing the RHIPE package and finding toy datasets and problem areas.\n\nFine. Play with that, too.\nDon't sweat finding \"big\" datasets. Even small datasets present very interesting problems. Indeed, any dataset is a starting-off point.\nI once built a small star-schema to analyze the $60M budget of an organization. The source data was in spreadsheets, and essentially incomprehensible. So I unloaded it into a star schema and wrote several analytical programs in Python to create simplified reports of the relevant numbers.\n\nFinding the right information to allow me to decide if I need to move to NoSQL from RDBMS type databases\n\nThis is easy.\nFirst, get a book on data warehousing (Ralph Kimball's The Data Warehouse Toolkit) for example.\nSecond, study the \"Star Schema\" carefully -- particularly all the variants and special cases that Kimball explains (in depth)\nThird, realize the following: SQL is for Updates and Transactions. \nWhen doing \"analytical\" processing (big or small) there's almost no update of any kind. 
SQL (and related normalization) don't really matter much any more.\nKimball's point (and others, too) is that most of your data warehouse is not in SQL, it's in simple Flat Files. A data mart (for ad-hoc, slice-and-dice analysis) may be in a relational database to permit easy, flexible processing with SQL.\nSo the \"decision\" is trivial. If it's transactional (\"OLTP\") it must be in a Relational or OO DB. If it's analytical (\"OLAP\") it doesn't require SQL except for slice-and-dice analytics; and even then the DB is loaded from the official files as needed.","Q_Score":41,"Tags":"python,r,hadoop,bigdata","A_Id":4323638,"CreationDate":"2010-12-01T08:45:00.000","Title":"How to get started with Big Data Analysis","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to generate a random graph that has small-world properties (exhibits a power law distribution). I just started using the networkx package and discovered that it offers a variety of random graph generation. Can someone tell me if it possible to generate a graph where a given node's degree follows a gamma distribution (either in R or using python's networkx package)?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":9242,"Q_Id":4328837,"Users Score":1,"Answer":"I know this is very late, but you can do the same thing, albeit a little more straightforward, with mathematica. \nRandomGraph[DegreeGraphDistribution[{3, 3, 3, 3, 3, 3, 3, 3}], 4]\nThis will generate 4 random graphs, with each node having a prescribed degree.","Q_Score":7,"Tags":"python,algorithm,r,graph,networkx","A_Id":19205464,"CreationDate":"2010-12-01T20:33:00.000","Title":"Generating a graph with certain degree distribution?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to generate a random graph that has small-world properties (exhibits a power law distribution). I just started using the networkx package and discovered that it offers a variety of random graph generation. Can someone tell me if it possible to generate a graph where a given node's degree follows a gamma distribution (either in R or using python's networkx package)?","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":9242,"Q_Id":4328837,"Users Score":2,"Answer":"I did this a while ago in base Python... IIRC, I used the following method. From memory, so this may not be entirely accurate, but hopefully it's worth something:\n\nChose the number of nodes, N, in your graph, and the density (existing edges over possible edges), D. This implies the number of edges, E.\nFor each node, assign its degree by first choosing a random positive number x and finding P(x), where P is your pdf. The node's degree is (P(x)*E\/2) -1.\nChose a node at random, and connect it to another random node. If either node has realized its assigned degree, eliminate it from further selection. Repeat E times. \n\nN.B. 
that this doesn't create a connected graph in general.","Q_Score":7,"Tags":"python,algorithm,r,graph,networkx","A_Id":4329072,"CreationDate":"2010-12-01T20:33:00.000","Title":"Generating a graph with certain degree distribution?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to quantize a 24bit image to 16bit color depth using Python Imaging.\nPIL used to provide a method im.quantize(colors, **options) however this has been deprecated for out = im.convert(\"P\", palette=Image.ADAPTIVE, colors=256)\nUnfortunately 256 is the MAXIMUM number of colors that im.convert() will quantize to (8 bit only).\nHow can I quantize a 24bit image down to 16bit using PIL (or similar)?\nthanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4447,"Q_Id":4345337,"Users Score":3,"Answer":"You might want to look into converting your image to a numpy array, performing your quantisation, then converting back to PIL.\nThere are modules in numpy to convert to\/from PIL images.","Q_Score":4,"Tags":"python,python-imaging-library,imaging","A_Id":4345485,"CreationDate":"2010-12-03T12:04:00.000","Title":"Python Imaging, how to quantize an image to 16bit depth?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to save a Matplotlib figure such that it can be re-opened and have typical interaction restored? (Like the .fig format in MATLAB?)\nI find myself running the same scripts many times to generate these interactive figures. Or I'm sending my colleagues multiple static PNG files to show different aspects of a plot. I'd rather send the figure object and have them interact with it themselves.","AnswerCount":7,"Available Count":1,"Score":0.057080742,"is_accepted":false,"ViewCount":98342,"Q_Id":4348733,"Users Score":2,"Answer":"Good question. Here is the doc text from pylab.save:\n\npylab no longer provides a save function, though the old pylab\n function is still available as matplotlib.mlab.save (you can still\n refer to it in pylab as \"mlab.save\"). However, for plain text\n files, we recommend numpy.savetxt. For saving numpy arrays,\n we recommend numpy.save, and its analog numpy.load, which are\n available in pylab as np.save and np.load.","Q_Score":144,"Tags":"python,matplotlib","A_Id":4348902,"CreationDate":"2010-12-03T18:41:00.000","Title":"Saving interactive Matplotlib figures","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an algorithm in python which creates measures for pairs of values, where m(v1, v2) == m(v2, v1) (i.e. it is symmetric). I had the idea to write a dictionary of dictionaries where these values are stored in a memory-efficient way, so that they can easily be retrieved with keys in any order. I like to inherit from things, and ideally, I'd love to write a symmetric_dict where s_d[v1][v2] always equals s_d[v2][v1], probably by checking which of the v's is larger according to some kind of ordering relation and then switching them around so that the smaller element one is always mentioned first. 
i.e. when calling s_d[5][2] = 4, the dict of dicts will turn them around so that they are in fact stored as s_d[2][5] = 4, and the same for retrieval of the data.\nI'm also very open for a better data structure, but I'd prefer an implementation with \"is-a\" relationship to something which just uses a dict and preprocesses some function arguments.","AnswerCount":7,"Available Count":1,"Score":0.0285636566,"is_accepted":false,"ViewCount":1589,"Q_Id":4368423,"Users Score":1,"Answer":"An obvious alternative is to use a (v1,v2) tuple as the key into a single standard dict, and insert both (v1,v2) and (v2,v1) into the dictionary, making them refer to the same object on the right-hand side.","Q_Score":4,"Tags":"python,inheritance,dictionary","A_Id":4368488,"CreationDate":"2010-12-06T16:11:00.000","Title":"Symmetric dictionary where d[a][b] == d[b][a]","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does PyPy work with NLTK, and if so, is there an appreciable performance improvement, say for the bayesian classifier? \nWhile we're at it, do any of the other python environments (shedskin, etc) offer better nlkt performance than cpython?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2318,"Q_Id":4390129,"Users Score":4,"Answer":"I got a response via email (Seo, please feel free to respond here) that said:\nThe main issues are:\nPyPy implements Python 2.5. This means adding \"from future import with_statement\" here and there, rewriting usages of property.setter, and fixing up new in 2.6 library calls like os.walk.\nNLTK needs PyYAML. Simply symlinking (or copying) stuffs to pypy-1.4\/site-packages work.\nAnd:\nDo you have NLTK running with PyPy, and if so are you seeing performance improvements?\nYes, and yes.\nSo apparently NLTK does run with PyPy and there are performance improvements.","Q_Score":15,"Tags":"python,nltk,pypy","A_Id":4854162,"CreationDate":"2010-12-08T17:01:00.000","Title":"Does PyPy work with NLTK?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does PyPy work with NLTK, and if so, is there an appreciable performance improvement, say for the bayesian classifier? \nWhile we're at it, do any of the other python environments (shedskin, etc) offer better nlkt performance than cpython?","AnswerCount":3,"Available Count":2,"Score":0.3215127375,"is_accepted":false,"ViewCount":2318,"Q_Id":4390129,"Users Score":5,"Answer":"At least some of NLTK does work with PyPy and there is some performance gain, according to someone on #pypy on freenode. Have you run any tests? 
Just download PyPy from pypy.org\/download.html and instead of \"time python yourscript.py data.txt\" type \"time pypy yourscript.py data.txt\".","Q_Score":15,"Tags":"python,nltk,pypy","A_Id":4549093,"CreationDate":"2010-12-08T17:01:00.000","Title":"Does PyPy work with NLTK?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the easiest way to save and load data in python, preferably in a human-readable output format?\nThe data I am saving\/loading consists of two vectors of floats. Ideally, these vectors would be named in the file (e.g. X and Y).\nMy current save() and load() functions use file.readline(), file.write() and string-to-float conversion. There must be something better.","AnswerCount":7,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":62035,"Q_Id":4450144,"Users Score":8,"Answer":"If it should be human-readable, I'd\nalso go with JSON. Unless you need to\nexchange it with enterprise-type\npeople, they like XML better. :-)\nIf it should be human editable and\nisn't too complex, I'd probably go\nwith some sort of INI-like format,\nlike for example configparser.\nIf it is complex, and doesn't need to\nbe exchanged, I'd go with just\npickling the data, unless it's very\ncomplex, in which case I'd use ZODB.\nIf it's a LOT of data, and needs to\nbe exchanged, I'd use SQL.\n\nThat pretty much covers it, I think.","Q_Score":14,"Tags":"python,io","A_Id":4450277,"CreationDate":"2010-12-15T13:02:00.000","Title":"easy save\/load of data in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I extract the first paragraph from a Wikipedia article, using Python?\nFor example, for Albert Einstein, that would be:\n\nAlbert Einstein (pronounced \/\u02c8\u00e6lb\u0259rt\n \u02c8a\u026ansta\u026an\/; German: [\u02c8alb\u0250t \u02c8a\u026an\u0283ta\u026an]\n ( listen); 14 March 1879 \u2013 18 April\n 1955) was a theoretical physicist,\n philosopher and author who is widely\n regarded as one of the most\n influential and iconic scientists and\n intellectuals of all time. A\n German-Swiss Nobel laureate, Einstein\n is often regarded as the father of\n modern physics.[2] He received the\n 1921 Nobel Prize in Physics \"for his\n services to theoretical physics, and\n especially for his discovery of the\n law of the photoelectric effect\".[3]","AnswerCount":10,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49932,"Q_Id":4460921,"Users Score":0,"Answer":"Try a combination of urllib to fetch the site and BeautifulSoup or lxml to parse the data.","Q_Score":42,"Tags":"python,wikipedia","A_Id":4460959,"CreationDate":"2010-12-16T12:49:00.000","Title":"Extract the first paragraph from a Wikipedia article (Python)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been reading a lot of stackoverflow questions about how to use the breadth-first search, dfs, A*, etc, the question is what is the optimal usage and how to implement it in reality verse simulated graphs. 
E.g.\nConsider you have a social graph of Twitter\/Facebook\/Some social networking site, to me it seems a search algorithm would work as follows:\nIf user A had 10 friends, then one of those had 2 friends and another 3. The search would first figure out who user A's friends were, then it would have to look up who the friends where to each of the ten users. To me this seems like bfs?\nHowever, I'm not sure if that's the way to go about implementing the algorithm.\nThanks,","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1271,"Q_Id":4488783,"Users Score":0,"Answer":"I have around 300 friends in facebook and some of my friends also have 300 friends on an average. If you gonna build a graph out of it , it's gonna be huge . Correct me , if I am wrong ? . A BFS will be quit lot demanding in this scenario ?\nThanks\nJ","Q_Score":2,"Tags":"python,algorithm,social-networking,traversal,breadth-first-search","A_Id":6335027,"CreationDate":"2010-12-20T10:32:00.000","Title":"Python usage of breadth-first search on social graph","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed the official python bindings for OpenCv and I am implementing some standard textbook functions just to get used to the python syntax. I have run into the problem, however, that CvSize does not actually exist, even though it is documented on the site... \nThe simple function: blah = cv.CvSize(inp.width\/2, inp.height\/2) yields the error 'module' object has no attribute 'CvSize'. I have imported with 'import cv'.\nIs there an equivalent structure? Do I need something more? Thanks.","AnswerCount":4,"Available Count":3,"Score":-0.049958375,"is_accepted":false,"ViewCount":7873,"Q_Id":4516007,"Users Score":-1,"Answer":"Perhaps the documentation is wrong and you have to use cv.cvSize instead of cv.CvSize ?\nAlso, do a dir(cv) to find out the methods available to you.","Q_Score":7,"Tags":"python,opencv,computer-vision","A_Id":4516073,"CreationDate":"2010-12-23T05:06:00.000","Title":"CvSize does not exist?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed the official python bindings for OpenCv and I am implementing some standard textbook functions just to get used to the python syntax. I have run into the problem, however, that CvSize does not actually exist, even though it is documented on the site... \nThe simple function: blah = cv.CvSize(inp.width\/2, inp.height\/2) yields the error 'module' object has no attribute 'CvSize'. I have imported with 'import cv'.\nIs there an equivalent structure? Do I need something more? Thanks.","AnswerCount":4,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":7873,"Q_Id":4516007,"Users Score":8,"Answer":"It seems that they opted to eventually avoid this structure altogether. 
Instead, it just uses a python tuple (width, height).","Q_Score":7,"Tags":"python,opencv,computer-vision","A_Id":6534684,"CreationDate":"2010-12-23T05:06:00.000","Title":"CvSize does not exist?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed the official python bindings for OpenCv and I am implementing some standard textbook functions just to get used to the python syntax. I have run into the problem, however, that CvSize does not actually exist, even though it is documented on the site... \nThe simple function: blah = cv.CvSize(inp.width\/2, inp.height\/2) yields the error 'module' object has no attribute 'CvSize'. I have imported with 'import cv'.\nIs there an equivalent structure? Do I need something more? Thanks.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":7873,"Q_Id":4516007,"Users Score":0,"Answer":"The right call is cv.cvSize(inp.width\/2, inp.height\/2). \nAll functions in the python opencv bindings start with a lowercased c even in the highgui module.","Q_Score":7,"Tags":"python,opencv,computer-vision","A_Id":5974122,"CreationDate":"2010-12-23T05:06:00.000","Title":"CvSize does not exist?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project built on NumPy, and I would like to take advantage of some of NumPy's optional architecture-specific optimizations. If I install NumPy on a paravirtualized Xen client OS (Ubuntu, in this case - a Linode), can I take advantage of those optimizations?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":99,"Q_Id":4523267,"Users Score":1,"Answer":"Yes. The optimizations run in userland and so shouldn't cause any PV traps.","Q_Score":1,"Tags":"python,ubuntu,numpy,virtualization,xen","A_Id":4523953,"CreationDate":"2010-12-23T23:38:00.000","Title":"NumPy Under Xen Client System","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the equivalent to someArray(:,1,1) in python from Matlab? \nIn python someArray[:][0][0] produces a different value","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":793,"Q_Id":4535359,"Users Score":5,"Answer":"someArray[:,0,0] is the Python NumPy equivalent of MATLAB's someArray(:,1,1). I've never figured out how to do it in pure Python, the colon slice operation is a total mystery to me with lists-of-lists.","Q_Score":4,"Tags":"python,arrays,matlab,syntax","A_Id":4535370,"CreationDate":"2010-12-26T20:47:00.000","Title":"Colon difference in Matlab and Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've written a simple k-means clustering code for Hadoop (two separate programs - mapper and reducer). The code is working over a small dataset of 2d points on my local box. 
It's written in Python and I plan to use Streaming API.\nI would like suggestions on how best to run this program on Hadoop.\nAfter each run of mapper and reducer, new centres are generated. These centres are input for the next iteration.\nFrom what I can see, each mapreduce iteration will have to be a separate mapreduce job. And it looks like I'll have to write another script (python\/bash) to extract the new centres from HDFS after each reduce phase, and feed it back to mapper. \nAny other easier, less messier way? If the cluster happens to use a fair scheduler, It will be very long before this computation completes?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":3152,"Q_Id":4537422,"Users Score":1,"Answer":"You needn't write another job. You can put the same job in a loop ( a while loop) and just keep changing the parameters of the job, so that when the mapper and reducer complete their processing, the control starts with creating a new configuration, and then you just automatically have an input file that is the output of the previous phase.","Q_Score":3,"Tags":"python,streaming,hadoop,mapreduce,iteration","A_Id":11489099,"CreationDate":"2010-12-27T08:18:00.000","Title":"Iterative MapReduce","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've been working on this for a long time, and I feel very worn out; I'm hoping for an [obvious?] insight from SO community that might get my pet project back on the move, so I can stop kicking myself. I'm using Cloudera CDH3, HBase .89 and Hadoop .20.\nI have a Python\/Django app that writes data to a single HBase table using the Thrift interface, and that works great. Now I want to Map\/Reduce it into some more HBase tables. \nThe obvious answer here is either Dumbo or Apache PIG, but with Pig, the HBaseStorage adapter support isn't available for my version yet (Pig is able to load the classes and definitions, but freezes at the \"Map\" step, complaining about \"Input Splits\"; Pig mailing lists suggest this is fixed in Pig 0.8, which is incompatible with CDH3 Hadoop, so I'd have to use edge versions of everything [i think]). I can't find any information on how to make Dumbo use HBaseStorage as a data sink.\nI don't care if it's Python, Ruby, Scala, Clojure, Jython, JRuby or even PHP, I just really don't want to write Java (for lots of reasons, most of them involving the sinking feeling I get every time I have to convert an Int() to IntWritable() etc).\nI've tried literally every last solution and example I can find (for the last 4 weeks) for writing HBase Map\/Reduce jobs in alternative languages, but everything seems to be either outdated or incomplete. Please, Stack Overflow, save me from my own devices!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1322,"Q_Id":4557045,"Users Score":0,"Answer":"It's not precisely an answer, but it's the closest I got --\nI asked in #hbase on irc.freenode.net yesterday, and one of the Cloudera employees responded.\nThe \"Input Splits\" problem I'm having with Pig is specific to Pig 0.7, and Pig 0.8 will be bundled with Cloudera CDH3 Beta 4 (no ETA on that). Therefore, what I want to do (easily write M\/R jobs using HBase tables as both sink and source) will be possible in their next release. 
It also seems that the HBaseStorage class will be generally improved to help with read\/write operations from ANY JVM language, as well, making Jython, JRuby, Scala and Clojure all much more feasible as well.\nSo the answer to the question, at this time, is \"Wait for CDH3 Beta 4\", or if you're impatient, \"Download the latest version of Pig and pray that it's compatible with your HBase\"","Q_Score":1,"Tags":"python,hadoop,mapreduce,hbase","A_Id":4567653,"CreationDate":"2010-12-29T19:12:00.000","Title":"Easiest non-Java way to write HBase MapReduce on CDH3?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Here's a somewhat simplified example of what I am trying to do. \nSuppose I have a formula that computes credit points, but the formula has no constraints (for example, the score might be 1 to 5000). And a score is assigned to 100 people. \nNow, I want to assign a \"normalized\" score between 200 and 800 to each person, based on a bell curve. So for example, if one guy has 5000 points, he might get an 800 on the new scale. The people with the middle of my point range will get a score near 500. In other words, 500 is the median? \nA similar example might be the old scenario of \"grading on the curve\", where a the bulk of the students perhaps get a C or C+. \nI'm not asking for the code, either a library, an algorithm book or a website to refer to.... I'll probably be writing this in Python (but C# is of some interest as well). There is NO need to graph the bell curve. My data will probably be in a database and I may have even a million people to which to assign this score, so scalability is an issue. \nThanks.","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":4989,"Q_Id":4560554,"Users Score":4,"Answer":"The important property of the bell curve is that it describes normal distribution, which is a simple model for many natural phenomena. I am not sure what kind of \"normalization\" you intend to do, but it seems to me that current score already complies with normal distribution, you just need to determine its properties (mean and variance) and scale each result accordingly.","Q_Score":2,"Tags":"c#,python,algorithm","A_Id":4561924,"CreationDate":"2010-12-30T06:36:00.000","Title":"Bell Curve Gaussian Algorithm (Python and\/or C#)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can you suggest a module function from numpy\/scipy that can find local maxima\/minima in a 1D numpy array? Obviously the simplest approach ever is to have a look at the nearest neighbours, but I would like to have an accepted solution that is part of the numpy distro.","AnswerCount":12,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":290777,"Q_Id":4624970,"Users Score":27,"Answer":"Another approach (more words, less code) that may help:\nThe locations of local maxima and minima are also the locations of the zero crossings of the first derivative. 
It is generally much easier to find zero crossings than it is to directly find local maxima and minima.\nUnfortunately, the first derivative tends to \"amplify\" noise, so when significant noise is present in the original data, the first derivative is best used only after the original data has had some degree of smoothing applied.\nSince smoothing is, in the simplest sense, a low pass filter, the smoothing is often best (well, most easily) done by using a convolution kernel, and \"shaping\" that kernel can provide a surprising amount of feature-preserving\/enhancing capability. The process of finding an optimal kernel can be automated using a variety of means, but the best may be simple brute force (plenty fast for finding small kernels). A good kernel will (as intended) massively distort the original data, but it will NOT affect the location of the peaks\/valleys of interest.\nFortunately, quite often a suitable kernel can be created via a simple SWAG (\"educated guess\"). The width of the smoothing kernel should be a little wider than the widest expected \"interesting\" peak in the original data, and its shape will resemble that peak (a single-scaled wavelet). For mean-preserving kernels (what any good smoothing filter should be) the sum of the kernel elements should be precisely equal to 1.00, and the kernel should be symmetric about its center (meaning it will have an odd number of elements.\nGiven an optimal smoothing kernel (or a small number of kernels optimized for different data content), the degree of smoothing becomes a scaling factor for (the \"gain\" of) the convolution kernel.\nDetermining the \"correct\" (optimal) degree of smoothing (convolution kernel gain) can even be automated: Compare the standard deviation of the first derivative data with the standard deviation of the smoothed data. How the ratio of the two standard deviations changes with changes in the degree of smoothing cam be used to predict effective smoothing values. A few manual data runs (that are truly representative) should be all that's needed.\nAll the prior solutions posted above compute the first derivative, but they don't treat it as a statistical measure, nor do the above solutions attempt to performing feature preserving\/enhancing smoothing (to help subtle peaks \"leap above\" the noise).\nFinally, the bad news: Finding \"real\" peaks becomes a royal pain when the noise also has features that look like real peaks (overlapping bandwidth). The next more-complex solution is generally to use a longer convolution kernel (a \"wider kernel aperture\") that takes into account the relationship between adjacent \"real\" peaks (such as minimum or maximum rates for peak occurrence), or to use multiple convolution passes using kernels having different widths (but only if it is faster: it is a fundamental mathematical truth that linear convolutions performed in sequence can always be convolved together into a single convolution). But it is often far easier to first find a sequence of useful kernels (of varying widths) and convolve them together than it is to directly find the final kernel in a single step.\nHopefully this provides enough info to let Google (and perhaps a good stats text) fill in the gaps. I really wish I had the time to provide a worked example, or a link to one. 
If anyone comes across one online, please post it here!","Q_Score":152,"Tags":"python,numpy","A_Id":19825314,"CreationDate":"2011-01-07T11:22:00.000","Title":"Finding local maxima\/minima with Numpy in a 1D numpy array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Here's the problem: I try to randomize n times a choice between two elements (let's say [0,1] -> 0 or 1), and my final list will have n\/2 [0] + n\/2 [1]. I tend to have this kind of result: [0 1 0 0 0 1 0 1 1 1 1 1 1 0 0, until n]: the problem is that I don't want to have serially 4 or 5 times the same number so often. I know that I could use a quasi randomisation procedure, but I don't know how to do so (I'm using Python).","AnswerCount":5,"Available Count":2,"Score":0.1586485043,"is_accepted":false,"ViewCount":588,"Q_Id":4630723,"Users Score":4,"Answer":"To guarantee that there will be the same number of zeros and ones you can generate a list containing n\/2 zeros and n\/2 ones and shuffle it with random.shuffle.\nFor small n, if you aren't happy that the result passes your acceptance criteria (e.g. not too many consecutive equal numbers), shuffle again. Be aware that doing this reduces the randomness of the result, not increases it.\nFor larger n it will take too long to find a result that passes your criteria using this method (because most results will fail). Instead you could generate elements one at a time with these rules:\n\nIf you already generated 4 ones in a row the next number must be zero and vice versa.\nOtherwise, if you need to generate x more ones and y more zeros, the chance of the next number being one is x\/(x+y).","Q_Score":1,"Tags":"python,random","A_Id":4630743,"CreationDate":"2011-01-07T22:07:00.000","Title":"Using Python for quasi randomization","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Here's the problem: I try to randomize n times a choice between two elements (let's say [0,1] -> 0 or 1), and my final list will have n\/2 [0] + n\/2 [1]. I tend to have this kind of result: [0 1 0 0 0 1 0 1 1 1 1 1 1 0 0, until n]: the problem is that I don't want to have serially 4 or 5 times the same number so often. 
I know that I could use a quasi randomisation procedure, but I don't know how to do so (I'm using Python).","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":588,"Q_Id":4630723,"Users Score":1,"Answer":"Having 6 1's in a row isn't particularly improbable -- are you sure you're not getting what you want?\nThere's a simple Python interface for a uniformly distributed random number, is that what you're looking for?","Q_Score":1,"Tags":"python,random","A_Id":4630745,"CreationDate":"2011-01-07T22:07:00.000","Title":"Using Python for quasi randomization","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i want to implement 1024x1024 monochromatic grid , i need read data from any cell and insert rectangles with various dimensions, i have tried to make list in list ( and use it like 2d array ), what i have found is that list of booleans is slower than list of integers.... i have tried 1d list, and it was slower than 2d one, numpy is slower about 10 times that standard python list, fastest way that i have found is PIL and monochromatic bitmap used with \"load\" method, but i want it to run a lot faster, so i have tried to compile it with shedskin, but unfortunately there is no pil support there, do you know any way of implementing such grid faster without rewriting it to c or c++ ?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":651,"Q_Id":4637190,"Users Score":0,"Answer":"One thing I might suggest is using Python's built-in array class (http:\/\/docs.python.org\/library\/array.html), with a type of 'B'. Coding will be simplest if you use one byte per pixel, but if you want to save memory, you can pack 8 to a byte, and access using your own bit manipulation.","Q_Score":3,"Tags":"python,bitmap,performance","A_Id":4637207,"CreationDate":"2011-01-09T01:31:00.000","Title":"Python Fast monochromatic bitmap","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i want to implement 1024x1024 monochromatic grid , i need read data from any cell and insert rectangles with various dimensions, i have tried to make list in list ( and use it like 2d array ), what i have found is that list of booleans is slower than list of integers.... 
i have tried 1d list, and it was slower than 2d one, numpy is slower about 10 times that standard python list, fastest way that i have found is PIL and monochromatic bitmap used with \"load\" method, but i want it to run a lot faster, so i have tried to compile it with shedskin, but unfortunately there is no pil support there, do you know any way of implementing such grid faster without rewriting it to c or c++ ?","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":651,"Q_Id":4637190,"Users Score":2,"Answer":"Raph's suggestin of using array is good, but it won't help on CPython, in fact I'd expect it to be 10-15% slower, however if you use it on PyPy (http:\/\/pypy.org\/) I'd expect excellent results.","Q_Score":3,"Tags":"python,bitmap,performance","A_Id":4637284,"CreationDate":"2011-01-09T01:31:00.000","Title":"Python Fast monochromatic bitmap","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to embed matplotlib charts into PDFs generated by ReportLab directly - i.e. not saving as a PNG first and then embedding the PNG into the PDF (i think I'll get better quality output).\nDoes anyone know if there's a matplotlib flowable for ReportLab?\nThanks","AnswerCount":4,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":10096,"Q_Id":4690585,"Users Score":4,"Answer":"There is not one, but what I do in my own use of MatPlotLib with ReportLab is to generate PNGs and then embed the PNGs so that I don't need to also use PIL. However, if you do use PIL, I believe you should be able to generate and embed EPS using MatPlotLib and ReportLab.","Q_Score":16,"Tags":"python,matplotlib,reportlab","A_Id":4880177,"CreationDate":"2011-01-14T11:32:00.000","Title":"Is there a matplotlib flowable for ReportLab?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have a NxN matrix M (lil_matrix or csr_matrix) from scipy.sparse, and I want to make it (N+1)xN where M_modified[i,j] = M[i,j] for 0 <= i < N (and all j) and M[N,j] = 0 for all j. Basically, I want to add a row of zeros to the bottom of M and preserve the remainder of the matrix. Is there a way to do this without copying the data?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":27075,"Q_Id":4695337,"Users Score":7,"Answer":"I don't think that there is any way to really escape from doing the copying. Both of those types of sparse matrices store their data as Numpy arrays (in the data and indices attributes for csr and in the data and rows attributes for lil) internally and Numpy arrays can't be extended.\nUpdate with more information:\nLIL does stand for LInked List, but the current implementation doesn't quite live up to the name. The Numpy arrays used for data and rows are both of type object. Each of the objects in these arrays are actually Python lists (an empty list when all values are zero in a row). Python lists aren't exactly linked lists, but they are kind of close and quite frankly a better choice due to O(1) look-up. Personally, I don't immediately see the point of using a Numpy array of objects here rather than just a Python list. 
You could fairly easily change the current lil implementation to use Python lists instead which would allow you to add a row without copying the whole matrix.","Q_Score":31,"Tags":"python,scipy,sparse-matrix","A_Id":4695834,"CreationDate":"2011-01-14T19:58:00.000","Title":"expanding (adding a row or column) a scipy.sparse matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a series of 20 plots (not subplots) to be made in a single figure. I want the legend to be outside of the box. At the same time, I do not want to change the axes, as the size of the figure gets reduced. Kindly help me for the following queries:\n\nI want to keep the legend box outside the plot area. (I want the legend to be outside at the right side of the plot area).\nIs there anyway that I reduce the font size of the text inside the legend box, so that the size of the legend box will be small.","AnswerCount":17,"Available Count":1,"Score":0.0235250705,"is_accepted":false,"ViewCount":1360273,"Q_Id":4700614,"Users Score":2,"Answer":"You can also try figlegend. It is possible to create a legend independent of any Axes object. However, you may need to create some \"dummy\" Paths to make sure the formatting for the objects gets passed on correctly.","Q_Score":1378,"Tags":"python,matplotlib,legend","A_Id":4710783,"CreationDate":"2011-01-15T16:10:00.000","Title":"How to put the legend outside the plot in Matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am about to embark on some signal processing work using NumPy\/SciPy. However, I have never used Python before and don't know where to start.\nI see there are currently two branches of Python in this world: Version 2.x and 3.x. \nBeing a neophile, I instinctively tend to go for the newer one, but there seems to be a lot of talk about incompatibilities between the two. Numpy seems to be compatible with Python 3. I can't find any documents on SciPy.\nWould you recommend to go with Python 3 or 2?\n(could you point me to some resources to get started? I know C\/C++, Ruby, Matlab and some other stuff and basically want to use NumPy instead of Matlab.)","AnswerCount":6,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":5690,"Q_Id":4758693,"Users Score":2,"Answer":"I am quite conservative in this respect, and so I use Python 2.6. That's what comes pre-installed on my Linux box, and it is also the target version for the latest binary releases of SciPy.\nPython 3 is without a doubt a huge step forward, but if you do mainly numerical stuff with NumPy and SciPy, I'd still go for Python 2.","Q_Score":8,"Tags":"python,numpy,python-3.x","A_Id":4758849,"CreationDate":"2011-01-21T12:17:00.000","Title":"I want to use NumPy\/SciPy. Should I use Python 2 or 3?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am about to embark on some signal processing work using NumPy\/SciPy. 
However, I have never used Python before and don't know where to start.\nI see there are currently two branches of Python in this world: Version 2.x and 3.x. \nBeing a neophile, I instinctively tend to go for the newer one, but there seems to be a lot of talk about incompatibilities between the two. Numpy seems to be compatible with Python 3. I can't find any documents on SciPy.\nWould you recommend to go with Python 3 or 2?\n(could you point me to some resources to get started? I know C\/C++, Ruby, Matlab and some other stuff and basically want to use NumPy instead of Matlab.)","AnswerCount":6,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":5690,"Q_Id":4758693,"Users Score":2,"Answer":"I can recommend Using py3k over py2.6 if possible. Especially if you're a new user, since some of the syntax changes in py3k and it'll be harder to get used the new syntax if you're starting out learning the old.\nThe modules you mention all have support for py3k but as SilentGhost noted you might want to check for compatibility with plotting libraries too.","Q_Score":8,"Tags":"python,numpy,python-3.x","A_Id":4758906,"CreationDate":"2011-01-21T12:17:00.000","Title":"I want to use NumPy\/SciPy. Should I use Python 2 or 3?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am about to embark on some signal processing work using NumPy\/SciPy. However, I have never used Python before and don't know where to start.\nI see there are currently two branches of Python in this world: Version 2.x and 3.x. \nBeing a neophile, I instinctively tend to go for the newer one, but there seems to be a lot of talk about incompatibilities between the two. Numpy seems to be compatible with Python 3. I can't find any documents on SciPy.\nWould you recommend to go with Python 3 or 2?\n(could you point me to some resources to get started? I know C\/C++, Ruby, Matlab and some other stuff and basically want to use NumPy instead of Matlab.)","AnswerCount":6,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":5690,"Q_Id":4758693,"Users Score":3,"Answer":"Both scipy and numpy are compatible with py3k. However, if you'll need to plot stuff: matplotlib is not yet officially compatible with py3k. So, it'll depend on whether your signalling processing involves plotting.\nSyntactic differences are not that great between the two version.","Q_Score":8,"Tags":"python,numpy,python-3.x","A_Id":4758785,"CreationDate":"2011-01-21T12:17:00.000","Title":"I want to use NumPy\/SciPy. Should I use Python 2 or 3?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to write a pretty heavy duty math-based project, which will parse through about 100MB+ data several times a day, so, I need a fast language that's pretty easy to use. I would have gone with C, but, getting a large project done in C is very difficult, especially with the low level programming getting in your way. So, I was about python or java. Both are well equiped with OO features, so I don't mind that. 
Now, here are my pros for choosing python:\n\nVery easy to use language\nHas a pretty large library of useful stuff\nHas an easy to use plotting library\n\nHere are the cons:\n\nNot exactly blazing\nThere isn't a native python neural network library that is active\nI can't close source my code without going through quite a bit of trouble\nDeploying python code on clients computers is hard to deal with, especially when clients are idiots.\n\nHere are the pros for choosing Java:\n\nHuge library\nWell supported\nEasy to deploy\nPretty fast, possibly even comparable to C++\nThe Encog Neural Network Library is really active and pretty awesome\nNetworking support is really good\nStrong typing\n\nHere are the cons for Java:\n\nI can't find a good graphing library like matplotlib for python\nNo built in support for big integers, that means another dependency (I mean REALLY big integers, not just math.BigInteger size)\nFile IO is kind of awkward compared to Python\nNot a ton of array manipulating or \"make programming easy\" type of features that python has.\n\nSo, I was hoping you guys can tell me what to use. I'm equally familiar with both languages. Also, suggestions for other languages is great too.\nEDIT: WOW! you guys are fast! 30 mins at 10 responses!","AnswerCount":8,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":12225,"Q_Id":4759485,"Users Score":0,"Answer":"What is more important for you?\nIf it's rapid application development, I found Python significantly easier to code for than Java - and I was just learning Python, while I had been coding on Java for years.\nIf it's application speed and the ability to reuse existing code, then you should probably stick with Java. It's reasonably fast and many research efforts at the moment use Java as their language of choice.","Q_Score":21,"Tags":"java,python,math,stocks","A_Id":4759549,"CreationDate":"2011-01-21T13:47:00.000","Title":"Java or Python for math?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to write a pretty heavy duty math-based project, which will parse through about 100MB+ data several times a day, so, I need a fast language that's pretty easy to use. I would have gone with C, but, getting a large project done in C is very difficult, especially with the low level programming getting in your way. So, I was about python or java. Both are well equiped with OO features, so I don't mind that. 
Now, here are my pros for choosing python:\n\nVery easy to use language\nHas a pretty large library of useful stuff\nHas an easy to use plotting library\n\nHere are the cons:\n\nNot exactly blazing\nThere isn't a native python neural network library that is active\nI can't close source my code without going through quite a bit of trouble\nDeploying python code on clients computers is hard to deal with, especially when clients are idiots.\n\nHere are the pros for choosing Java:\n\nHuge library\nWell supported\nEasy to deploy\nPretty fast, possibly even comparable to C++\nThe Encog Neural Network Library is really active and pretty awesome\nNetworking support is really good\nStrong typing\n\nHere are the cons for Java:\n\nI can't find a good graphing library like matplotlib for python\nNo built in support for big integers, that means another dependency (I mean REALLY big integers, not just math.BigInteger size)\nFile IO is kind of awkward compared to Python\nNot a ton of array manipulating or \"make programming easy\" type of features that python has.\n\nSo, I was hoping you guys can tell me what to use. I'm equally familiar with both languages. Also, suggestions for other languages is great too.\nEDIT: WOW! you guys are fast! 30 mins at 10 responses!","AnswerCount":8,"Available Count":3,"Score":0.024994793,"is_accepted":false,"ViewCount":12225,"Q_Id":4759485,"Users Score":1,"Answer":"If those are the choices, then Java should be the faster for math intensive work. It is compiled (although yes it is still running byte code).\nExelian mentions NumPy. There's also the SciPy package. Both are worth looking at but only really seem to give speed improvements for work with lots of arrays and vector processing.\nWhen I tried using these with NLTK for a math-intensive routine, I found there wasn't that much of a speedup.\nFor math intensive work these days, I'd be using C\/C++ or C# (personally I prefer C# over Java although that shouldn't affect your decision). My first employer out of univ. paid me to use Fortran for stuff that is almost certainly more math intensive than anything you're thinking of. Don't laugh - the Fortran compilers are some of the best for math processing on heavy iron.","Q_Score":21,"Tags":"java,python,math,stocks","A_Id":4759596,"CreationDate":"2011-01-21T13:47:00.000","Title":"Java or Python for math?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to write a pretty heavy duty math-based project, which will parse through about 100MB+ data several times a day, so, I need a fast language that's pretty easy to use. I would have gone with C, but, getting a large project done in C is very difficult, especially with the low level programming getting in your way. So, I was about python or java. Both are well equiped with OO features, so I don't mind that. 
Now, here are my pros for choosing python:\n\nVery easy to use language\nHas a pretty large library of useful stuff\nHas an easy to use plotting library\n\nHere are the cons:\n\nNot exactly blazing\nThere isn't a native python neural network library that is active\nI can't close source my code without going through quite a bit of trouble\nDeploying python code on clients computers is hard to deal with, especially when clients are idiots.\n\nHere are the pros for choosing Java:\n\nHuge library\nWell supported\nEasy to deploy\nPretty fast, possibly even comparable to C++\nThe Encog Neural Network Library is really active and pretty awesome\nNetworking support is really good\nStrong typing\n\nHere are the cons for Java:\n\nI can't find a good graphing library like matplotlib for python\nNo built in support for big integers, that means another dependency (I mean REALLY big integers, not just math.BigInteger size)\nFile IO is kind of awkward compared to Python\nNot a ton of array manipulating or \"make programming easy\" type of features that python has.\n\nSo, I was hoping you guys can tell me what to use. I'm equally familiar with both languages. Also, suggestions for other languages is great too.\nEDIT: WOW! you guys are fast! 30 mins at 10 responses!","AnswerCount":8,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":12225,"Q_Id":4759485,"Users Score":0,"Answer":"The Apache Commons Math picked up where JAMA left off. They are quite capable for scientific computing. \nSo is Python - NumPy and SciPy are excellent. I also like the fact that Python is a hybrid of object-orientation and functional programming. Functional programming is awfully handy for numerical methods.\nI'd recommend using the one that you know best, but if the choice is a toss up I might lean towards Python.","Q_Score":21,"Tags":"java,python,math,stocks","A_Id":4765700,"CreationDate":"2011-01-21T13:47:00.000","Title":"Java or Python for math?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Ubuntu 10.04 and have successfully configured PyDev to work with Python and have written a few simple example projects. Now I am trying to incorporate numpy and matplotlib. I have gotten numpy installed and within PyDev I did not need to alter any paths, etc., and after the installation of numpy I was automatically able to import numpy with no problem. However, following the same procedure with matplotlib hasn't worked. If I run Python from the command line, then import matplotlib works just fine. But within PyDev, I just get the standard error where it can't locate matplotlib when I try import matplotlib.\nSince numpy didn't require any alteration of the PYTHONPATH, I feel that neither should matplotlib, so can anyone help me figure out why matplotlib isn't accessible from within my existing project while numpy is? Thanks for any help.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3642,"Q_Id":4788748,"Users Score":0,"Answer":"Right click your project, then go to properties, then click PyDev - Interpreter\/Grammar, click \"Click here to configure an interpreter not listed\". Then select the interpreter you are using, click Install\/Uninstall with pip, then enter matplotlib for . 
Then restart Eclipse and it should work.","Q_Score":0,"Tags":"python,module,import,matplotlib,pydev","A_Id":52617040,"CreationDate":"2011-01-25T00:22:00.000","Title":"Unable to import matplotlib in PyDev","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Ubuntu 10.04 and have successfully configured PyDev to work with Python and have written a few simple example projects. Now I am trying to incorporate numpy and matplotlib. I have gotten numpy installed and within PyDev I did not need to alter any paths, etc., and after the installation of numpy I was automatically able to import numpy with no problem. However, following the same procedure with matplotlib hasn't worked. If I run Python from the command line, then import matplotlib works just fine. But within PyDev, I just get the standard error where it can't locate matplotlib when I try import matplotlib.\nSince numpy didn't require any alteration of the PYTHONPATH, I feel that neither should matplotlib, so can anyone help me figure out why matplotlib isn't accessible from within my existing project while numpy is? Thanks for any help.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":3642,"Q_Id":4788748,"Users Score":1,"Answer":"I added numpy to the Forced Builtins and worked like charm.","Q_Score":0,"Tags":"python,module,import,matplotlib,pydev","A_Id":6766298,"CreationDate":"2011-01-25T00:22:00.000","Title":"Unable to import matplotlib in PyDev","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Ubuntu 10.04 and have successfully configured PyDev to work with Python and have written a few simple example projects. Now I am trying to incorporate numpy and matplotlib. I have gotten numpy installed and within PyDev I did not need to alter any paths, etc., and after the installation of numpy I was automatically able to import numpy with no problem. However, following the same procedure with matplotlib hasn't worked. If I run Python from the command line, then import matplotlib works just fine. But within PyDev, I just get the standard error where it can't locate matplotlib when I try import matplotlib.\nSince numpy didn't require any alteration of the PYTHONPATH, I feel that neither should matplotlib, so can anyone help me figure out why matplotlib isn't accessible from within my existing project while numpy is? Thanks for any help.","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":3642,"Q_Id":4788748,"Users Score":2,"Answer":"Sounds like the interpreter you setup for Pydev is not pointing to the appropriate version of python (that you've install mpl and np). In the terminal, it's likely the effect of typing python is tantamount to env python; pydev might not be using this interpreter. 
\nBut, if the pydev interpreter is pointed to the right location, you might simply have to rehash the interpreter (basically, set it up again) to have mpl show up.\nYou could try this in the terminal and see if the results are different:\npython -c 'import platform; print platform.python_version()'\n${PYTHONPATH}\/python -c 'import platform; print platform.python_version()'","Q_Score":0,"Tags":"python,module,import,matplotlib,pydev","A_Id":4999217,"CreationDate":"2011-01-25T00:22:00.000","Title":"Unable to import matplotlib in PyDev","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We need a content classification module. Bayesian classifier seems to be what I am looking for. Should we go for Orange or NLTK ?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":3710,"Q_Id":4789318,"Users Score":1,"Answer":"NLTK is a toolkit that supports a four state model of natural language processing:\n\nTokenizing: grouping characters as words. This ranges from trivial regex stuff to dealing with contractions like \"can't\"\nTagging. This is applying part-of-speech tags to the tokens (eg \"NN\" for noun, \"VBG\" for verb gerund). This is typically done by training a model (eg Hidden Markov) on a training corpus (i.e. large list of by by hand tagged sentences).\nChunking\/Parsing. This is taking each tagged sentence and extracting features into a tree (eg noun phrases). This can be according to a hand-written grammar or a one trained on a corpus.\nInformation extraction. This is traversing the tree and extracting the data. This is where your specific orange=fruit would be done.\n\nNLTK supports WordNet, a huge semantic dictionary that classifies words. So there are 5 noun definitions for orange (fruit, tree, pigment, color, river in South Africa). Each of these has one or more 'hypernym paths' that are hierarchies of classifications. E.g. the first sense of 'orange' has a two paths:\n\norange\/citrus\/edible_fruit\/fruit\/reproductive_structure\/plant_organ\/plant_part\/natural_object\/whole\/object\/physical_entity\/entity\n\nand\n\norange\/citrus\/edible_fruit\/produce\/food\/solid\/matter\/physical_entity\/entity\n\nDepending on your application domain you can identify orange as a fruit, or a food, or a plant thing. Then you can use the chunked tree structure to determine more (who did what to the fruit, etc.)","Q_Score":11,"Tags":"python,machine-learning,nltk,naivebayes,orange","A_Id":10869425,"CreationDate":"2011-01-25T02:07:00.000","Title":"Orange vs NLTK for Content Classification in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We need a content classification module. Bayesian classifier seems to be what I am looking for. Should we go for Orange or NLTK ?","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":3710,"Q_Id":4789318,"Users Score":3,"Answer":"I don't know Orange, but +1 for NLTK:\nI've successively used the classification tools in NLTK to classify text and related meta data. Bayesian is the default but there are other alternatives such as Maximum Entropy. Also being a toolkit, you can customize as you see fit - eg. 
creating your own features (which is what I did for the meta data).\nNLTK also has a couple of good books - one of which is available under Creative Commons (as well as O'Reilly).","Q_Score":11,"Tags":"python,machine-learning,nltk,naivebayes,orange","A_Id":4789820,"CreationDate":"2011-01-25T02:07:00.000","Title":"Orange vs NLTK for Content Classification in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to parse CSV data in Python when the data is not in a file? I'm storing CSV data in my database and I'd like to parse it. I'm looking for something analogous to Ruby's CSV.parse. I know Python has a CSV class but everything I've seen in the docs seems to deal with files as opposed to in-memory CSV data.\n(And it's not an option to parse the data before it goes into the database.)\n(And please don't tell me not to store the CSV data in the database. I know what I'm doing as far as the database goes.)","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":6225,"Q_Id":4855523,"Users Score":1,"Answer":"Use the stringio module, which allows you to dress strings as file-like objects. That way you can pass a stringio \"file\" to the CSV module for parsing (or any other parser you may be using).","Q_Score":7,"Tags":"python,csv","A_Id":4855557,"CreationDate":"2011-01-31T20:07:00.000","Title":"Parsing CSV data from memory in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to parse a text and categorize the sentences according to their grammatical structure, but I have a very small understanding of NLP so I don't even know where to start.\nAs far as I have read, I need to parse the text and find out (or tag?) the part-of-speech of every word. Then I search for the verb clause or whatever other defining characteristic I want to use to categorize the sentences.\nWhat I don't know is if there is already some method to do this more easily or if I need to define the grammar rules separately or what.\nAny resources on NLP that discuss this would be great. Program examples are welcome as well. I have used NLTK before, but not extensively. Other parsers or languages are OK too!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1212,"Q_Id":4895661,"Users Score":0,"Answer":"May be you need simply define patterns like \"noun verb noun\" etc for each type of grammatical structure and search matches in part-of-speach tagger output sequence.","Q_Score":2,"Tags":"python,nlp,grammar,nltk","A_Id":4895933,"CreationDate":"2011-02-04T07:45:00.000","Title":"How would I go about categorizing sentences according to tense (present, past, future, etc.)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project to classify snippets of text using the python nltk module and the naivebayes classifier. 
I am able to train on corpus data and classify another set of data but would like to feed additional training information into the classifier after initial training.\nIf I'm not mistaken, there doesn't appear to be a way to do this, in that the NaiveBayesClassifier.train method takes a complete set of training data. Is there a way to add to the training data without feeding in the original featureset?\nI'm open to suggestions including other classifiers that can accept new training data over time.","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":3608,"Q_Id":4905368,"Users Score":8,"Answer":"There's 2 options that I know of:\n1) Periodically retrain the classifier on the new data. You'd accumulate new training data in a corpus (that already contains the original training data), then every few hours, retrain & reload the classifier. This is probably the simplest solution.\n2) Externalize the internal model, then update it manually. The NaiveBayesClassifier can be created directly by giving it a label_probdist and a feature_probdist. You could create these separately, pass them in to a NaiveBayesClassifier, then update them whenever new data comes in. The classifier would use this new data immediately. You'd have to look at the train method for details on how to update the probability distributions.","Q_Score":17,"Tags":"python,nltk","A_Id":4908925,"CreationDate":"2011-02-05T05:50:00.000","Title":"How to incrementally train an nltk classifier","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to create a multi-dimensional array of different types in Python? The way I usually solve it is [([None] * n) for i in xrange(m)], but I don't want to use list. I want something which is really a continuous array of pointers in memory, not a list. (Each list itself is continuous, but when you make a list of lists, the different lists may be spread in different places in RAM.)\nAlso, writing [([None] * n) for i in xrange(m)] is quite a convoluted way of initializing an empty array, in contrast to something like empty_array(m, n). Is there a better alternative?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":5249,"Q_Id":4909748,"Users Score":2,"Answer":"In many cases such arrays are not required as there are more elegant solutions to these problems. Explain what you want to do so someone can give some hints.\nAnyway, if you really, really need such data structure, use array.array.","Q_Score":5,"Tags":"python,arrays","A_Id":4909786,"CreationDate":"2011-02-05T21:23:00.000","Title":"Python: Multi-dimensional array of different types","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got 2 questions on analyzing a GPS dataset.\n1) Extracting trajectories I have a huge database of recorded GPS coordinates of the form (latitude, longitude, date-time). According to date-time values of consecutive records, I'm trying to extract all trajectories\/paths followed by the person. For instance; say from time M, the (x,y) pairs are continuously changing up until time N. 
After N, the change in (x,y) pairs decrease, at which point I conclude that the path taken from time M to N can be called a trajectory. Is that a decent approach to follow when extracting trajectories? Are there any well-known approaches\/methods\/algorithms you can suggest? Are there any data structures or formats you would like to suggest me to maintain those points in an efficient manner? Perhaps, for each trajectory, figuring out the velocity and acceleration would be useful?\n2) Mining the trajectories Once I have all the trajectories followed\/paths taken, how can I compare\/cluster them? I would like to know if the start or end points are similar, then how do the intermediate paths compare? \nHow do I compare the 2 paths\/routes and conclude if they are similar or not. Furthermore; how do I cluster similar paths together?\nI would highly appreciate it if you can point me to a research or something similar on this matter.\nThe development will be in Python, but all kinds of library suggestions are welcome.\nThanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":6451,"Q_Id":4910510,"Users Score":1,"Answer":"1) Extracting trajectories \nI think you are in the right direction. There probably will be some noise in GPS data, and random walking; you should do some smoothing like splines to overcome it. \n\n2) Mining the trajectories\nIs there any business sense in similar trajectories? (This will help build a distance metric and then you can use some of the Mahout clustering algorithms)\n 1. I think points where some person stopped are more interesting, so you can generate statistics for popularity of places.\n 2. If you need route similarity to find different paths to the same start-end, you need to cluster the start and end locations first and then compare similar curves by (maximum distance between, integral distance - some well-known functional metrics)","Q_Score":10,"Tags":"python,algorithm,gps,gis,data-mining","A_Id":4919289,"CreationDate":"2011-02-05T23:48:00.000","Title":"Comparing\/Clustering Trajectories (GPS data of (x,y) points) and Mining the data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to extract the text information contained in a postscript image file (the captions to my axis labels).\nThese images were generated with pgplot. I have tried ps2ascii and ps2txt on Ubuntu but they didn't produce any useful results. Does anyone know of another method?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1410,"Q_Id":4934669,"Users Score":6,"Answer":"It's likely that pgplot drew the fonts in the text directly with lines rather than using text. Especially since pgplot is designed to output to a huge range of devices including plotters where you would have to do this.\nEdit:\nIf you have enough plots to be worth\n the effort then it's a very simple\n image processing task. Convert each\n page to something like tiff, in mono\n chrome Threshold the image to binary, \n the text will be max pixel value. \nUse a template matching technique.\n If you have a limited set of\n possible labels then just match the\n entire label, you can even start\n with a template of the correct size\n and rotation. Then just flag each\n plot as containing label[1-n], no\n need to read the actual text. 
\nIf you\n don't know the label then you can\n still do OCR fairly easily, just\n extract the region around the axis,\n rotate it for the vertical - and use\n Google's free OCR lib\nIf you have pgplot you can even\n build the training set for OCR or\n the template images directly rather\n than having to harvest them from the\n image list","Q_Score":3,"Tags":"python,image,text,postscript","A_Id":4934824,"CreationDate":"2011-02-08T15:03:00.000","Title":"Is there a way to extract text information from a postscript file? (.ps .eps)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a program in Python. The first thing that happens is a window is displayed (I'm using wxPython) that has some buttons and text. When the user performs some actions, a plot is displayed in its own window. This plot is made with R, using rpy2. The problem is that the plot usually pops up on top of the main window, so the user has to move the plot to see the main window again. This is a big problem for the user, because he's lazy and good-for-nothing. He wants the plot to simply appear somewhere else, so he can see the main window and the plot at the same time, without having to lift a finger.\nTwo potential solutions to my problem are:\n(1) display the plot within a wxPython frame (which I think I could control the location of), or\n(2) be able to specify where on the screen the plot window appears.\nI can't figure out how to do either.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":757,"Q_Id":4938556,"Users Score":5,"Answer":"Plot to a graphics file using jpeg(), png() or another device, then display that file on your wxWidget.","Q_Score":3,"Tags":"python,r,plot,rpy2","A_Id":4938599,"CreationDate":"2011-02-08T21:17:00.000","Title":"How do I control where an R plot is displayed, using python and rpy2?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can draw matplotlib graph in command line(shell) environment, but I find that I could not draw the same graph inside the eclipse IDE. such as plot([1,2,3]) not show in eclipse, I writed show() in the end but still not show anything \nmy matplotlib use GTKAgg as backend, I use Pydev as plugin of eclipse to develop python.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3911,"Q_Id":4941162,"Users Score":1,"Answer":"import matplotlib.pyplot as mp\nmp.ion()\nmp.plot(x,y)\nmp.show()","Q_Score":1,"Tags":"python,eclipse,matplotlib","A_Id":5640511,"CreationDate":"2011-02-09T03:56:00.000","Title":"How to draw matplotlib graph in eclipse?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to implement Dijkstra's Algorithm in Python. 
However, I have to use a 2D array to hold three pieces of information - predecessor, length and unvisited\/visited.\nI know in C a Struct can be used, though I am stuck on how I can do a similar thing in Python, I am told it's possible but I have no idea to be honest","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":5571,"Q_Id":4962202,"Users Score":1,"Answer":"Encapsulate that information in a Python object and you should be fine.","Q_Score":0,"Tags":"python,dijkstra","A_Id":4962223,"CreationDate":"2011-02-10T20:21:00.000","Title":"Python - Dijkstra's Algorithm","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to implement Dijkstra's Algorithm in Python. However, I have to use a 2D array to hold three pieces of information - predecessor, length and unvisited\/visited.\nI know in C a Struct can be used, though I am stuck on how I can do a similar thing in Python, I am told it's possible but I have no idea to be honest","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":5571,"Q_Id":4962202,"Users Score":0,"Answer":"Python is object oriented language. So think of it like moving from Structs in C to Classes of C++. You can use the same class structure in Python as well.","Q_Score":0,"Tags":"python,dijkstra","A_Id":4962291,"CreationDate":"2011-02-10T20:21:00.000","Title":"Python - Dijkstra's Algorithm","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am installing Python 2.7 in addition to 2.7. When installing PyTables again for 2.7, I get this error -\n\nFound numpy 1.5.1 package installed.\n.. ERROR:: Could not find a local HDF5 installation.\nYou may need to explicitly state where your local HDF5 headers and\nlibrary can be found by setting the HDF5_DIR environment\nvariable or by using the --hdf5 command-line option.\n\nI am not clear on the HDF installation. I downloaded again - and copied it into a \/usr\/local\/hdf5 directory. And tried to set the environement vars as suggested in the PyTable install. Has anyone else had this problem that could help?","AnswerCount":5,"Available Count":4,"Score":0.0798297691,"is_accepted":false,"ViewCount":6903,"Q_Id":4972079,"Users Score":2,"Answer":"Do the following steps:\n\nbrew tap homebrew\/science\nbrew install hdf5\nsee where hdf5 is installed, it shows at the end of second step\nexport HDF5_DIR=\/usr\/local\/Cellar\/hdf5\/1.8.16_1\/ (Depending on the location that is installed on your computer)\nThis one worked for me on MAC :-)","Q_Score":7,"Tags":"python,hdf5,pytables","A_Id":38577661,"CreationDate":"2011-02-11T17:23:00.000","Title":"Unable to reinstall PyTables for Python 2.7","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am installing Python 2.7 in addition to 2.7. When installing PyTables again for 2.7, I get this error -\n\nFound numpy 1.5.1 package installed.\n.. 
ERROR:: Could not find a local HDF5 installation.\nYou may need to explicitly state where your local HDF5 headers and\nlibrary can be found by setting the HDF5_DIR environment\nvariable or by using the --hdf5 command-line option.\n\nI am not clear on the HDF installation. I downloaded again - and copied it into a \/usr\/local\/hdf5 directory. And tried to set the environement vars as suggested in the PyTable install. Has anyone else had this problem that could help?","AnswerCount":5,"Available Count":4,"Score":0.0798297691,"is_accepted":false,"ViewCount":6903,"Q_Id":4972079,"Users Score":2,"Answer":"I had to install libhdf5-8 and libhdf5-serial-dev first.\nThen, for me, the command on Ubuntu was:\nexport HDF5_DIR=\/usr\/lib\/x86_64-linux-gnu\/hdf5\/serial\/","Q_Score":7,"Tags":"python,hdf5,pytables","A_Id":29871380,"CreationDate":"2011-02-11T17:23:00.000","Title":"Unable to reinstall PyTables for Python 2.7","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am installing Python 2.7 in addition to 2.7. When installing PyTables again for 2.7, I get this error -\n\nFound numpy 1.5.1 package installed.\n.. ERROR:: Could not find a local HDF5 installation.\nYou may need to explicitly state where your local HDF5 headers and\nlibrary can be found by setting the HDF5_DIR environment\nvariable or by using the --hdf5 command-line option.\n\nI am not clear on the HDF installation. I downloaded again - and copied it into a \/usr\/local\/hdf5 directory. And tried to set the environement vars as suggested in the PyTable install. Has anyone else had this problem that could help?","AnswerCount":5,"Available Count":4,"Score":0.1586485043,"is_accepted":false,"ViewCount":6903,"Q_Id":4972079,"Users Score":4,"Answer":"My HDF5 was installed with homebrew, so setting the environment variable as follows worked for me: HDF5_DIR=\/usr\/local\/Cellar\/hdf5\/1.8.9","Q_Score":7,"Tags":"python,hdf5,pytables","A_Id":13868984,"CreationDate":"2011-02-11T17:23:00.000","Title":"Unable to reinstall PyTables for Python 2.7","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am installing Python 2.7 in addition to 2.7. When installing PyTables again for 2.7, I get this error -\n\nFound numpy 1.5.1 package installed.\n.. ERROR:: Could not find a local HDF5 installation.\nYou may need to explicitly state where your local HDF5 headers and\nlibrary can be found by setting the HDF5_DIR environment\nvariable or by using the --hdf5 command-line option.\n\nI am not clear on the HDF installation. I downloaded again - and copied it into a \/usr\/local\/hdf5 directory. And tried to set the environement vars as suggested in the PyTable install. Has anyone else had this problem that could help?","AnswerCount":5,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":6903,"Q_Id":4972079,"Users Score":4,"Answer":"The hdf5 command line option was not stated correctly ( --hdf5='\/usr\/local\/hdf5' ). 
Sprinkling print statements in the setup.py made it easier to pin down the problem.","Q_Score":7,"Tags":"python,hdf5,pytables","A_Id":4992253,"CreationDate":"2011-02-11T17:23:00.000","Title":"Unable to reinstall PyTables for Python 2.7","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"which one is faster:\nUsing Lattice Multiplication with threads(big numbers) OR\nUsing common Multiplication with threads(big numbers)\nDo you know any source code, to test them?\n-----------------EDIT------------------\nThe theads should be implemented in C, or Java for testing","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":519,"Q_Id":4988830,"Users Score":1,"Answer":"If I understand you correctly, \"lattice multiplication\" is different way of doing base-10 multiplication by hand that is supposed to be easier for kids to understand than the classic way. I assume \"common multiplication\" is the classic way.\nSo really, I think that the best answer is:\n\nNeither \"lattice multiplication\" or \"common multiplication\" are good (efficient) ways of doing multiplication on a computer. For small numbers (up to 2**64), built-in hardware multiplication is better. For large numbers, you are best of breaking the numbers into 8 or 32 bit chunks ...\nMulti-threading is unlikely to speed up multiplication unless you have very large numbers. The inherent cost of creating (or recycling) a thread is likely to swamp any theoretical speedup for smaller numbers. And for larger numbers (and larger numbers of threads) you need to worry about the bandwidth of copying the data around.\n\nNote there is a bit of material around on parallel multiplication (Google), but it is mostly in the academic literature ... which maybe says something about how practical it really is for the kind of hardware used today for low and high end computing.","Q_Score":0,"Tags":"java,python,c,multiplication","A_Id":4989194,"CreationDate":"2011-02-14T04:31:00.000","Title":"Lattice Multiplication with threads, is it more efficient?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a \"method\" to get a formula, formula which comes from fitting a set of data (3000 point). I was using Legendre polynomial, but for > 20 points it gives not exact values. I can write chi2 test, but algorithm needs a loot of time to calculate N parameters, and at the beginning I don't know how the function looks like, so it takes time. I was thinking about splines... Maybe ...\nSo the input is: 3000 pints\nOutput : f(x) = ... something\nI want to have a formula from fit. What is a best way to do this in python?\nLet the force would be with us!\nNykon","AnswerCount":4,"Available Count":2,"Score":0.1488850336,"is_accepted":false,"ViewCount":2988,"Q_Id":5021921,"Users Score":3,"Answer":"Except, a spline does not give you a \"formula\", at least not unless you have the wherewithal to deal with all of the piecewise segments. Even then, it will not be easily written down, or give you anything that is at all pretty to look at.\nA simple spline gives you an interpolant. Worse, for 3000 points, an interpolating spline will give you roughly that many cubic segments! 
You did say interpolation before. OF course, an interpolating polynomial of that high an order will be complete crapola anyway, so don't think you can just go back there.\nIf all that you need is a tool that can provide an exact interpolation at any point, and you really don't need to have an explicit formula, then an interpolating spline is a good choice.\nOr do you really want an approximant? A function that will APPROXIMATELY fit your data, smoothing out any noise? The fact is, a lot of the time when people who have no idea what they are doing say \"interpolation\" they really do mean approximation, smoothing. This is possible of course, but there are entire books written on the subject of curve fitting, the modeling of empirical data. You first goal is then to choose an intelligent model, that will represent this data. Best of course is if you have some intelligent choice of model from physical understanding of the relationship under study, then you can estimate the parameters of that model using a nonlinear regression scheme, of which there are many to be found.\nIf you have no model, and are unwilling to choose one that roughly has the proper shape, then you are left with generic models in the form of splines, which can be fit in a regression sense, or with high order polynomial models, for which I have little respect.\nMy point in all of this is YOU need to make some choices and do some research on a choice of model.","Q_Score":1,"Tags":"python,numpy,scipy,data-fitting","A_Id":5022089,"CreationDate":"2011-02-16T20:48:00.000","Title":"large set of data, interpolation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a \"method\" to get a formula, formula which comes from fitting a set of data (3000 point). I was using Legendre polynomial, but for > 20 points it gives not exact values. I can write chi2 test, but algorithm needs a loot of time to calculate N parameters, and at the beginning I don't know how the function looks like, so it takes time. I was thinking about splines... Maybe ...\nSo the input is: 3000 pints\nOutput : f(x) = ... something\nI want to have a formula from fit. What is a best way to do this in python?\nLet the force would be with us!\nNykon","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2988,"Q_Id":5021921,"Users Score":0,"Answer":"The only formula would be a polynomial of order 3000.\nHow good does the fit need to be? What type of formula do you expect?","Q_Score":1,"Tags":"python,numpy,scipy,data-fitting","A_Id":5022008,"CreationDate":"2011-02-16T20:48:00.000","Title":"large set of data, interpolation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Has anybody ever encountered problems with fmin_slsqp (or anything else in scipy.optimize) only when using very large or very small numbers?\nI am working on some python code to take a grayscale image and a mask, generate a histogram, then fit multiple gaussians to the histogram. To develop the code I used a small sample image, and after some work the code was working brilliantly. 
However, when I normalize the histogram first, generating bin values <<1, or when I histogram huge images, generating bin values in the hundreds of thousands, fmin_slsqp() starts failing sporadically. It quits after only ~5 iterations, usually just returning a slightly modified version of the initial guess I gave it, and returns exit mode 8, which means \"Positive directional derivative for linesearch.\" If I check the size of the bin counts at the beginning and scale them into the neighborhood of ~100-1000, fmin_slsqp() works as usual. I just un-scale things before returning the results. I guess I could leave it like that, but it feels like a hack.\nI have looked around and found folks talking about the epsilon value, which is basically the dx used for approximating derivatives, but tweaking that has not helped. Other than that I haven't found anything useful yet. Any ideas would be greatly appreciated. Thanks in advance.\njames","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":5001,"Q_Id":5023846,"Users Score":1,"Answer":"I got in trouble with this issue too, but I solved it in my project. I'm not sure if this is a general solution.\nThe reason was that the scipy.optimize.fmin_slsqp calculated the gradient by an approximate approach when the argument jac is set by False or default. The gradient produced from the approximate approach was not normalized (with large scale). When calculating the step length, large value of gradient would influence the performance and precision of line search. This might be the reason why we got Positive directional derivative for linesearch.\nYou can try to implement the closed form of the Jacobian matrix to the object function and pass it to the jac argument. More importantly, you should rescale the value of Jacobian matrix (like normalization) to avoid affecting line search.\nBest.","Q_Score":5,"Tags":"python,scipy,histogram,gaussian,least-squares","A_Id":34021509,"CreationDate":"2011-02-17T00:27:00.000","Title":"Problem with scipy.optimize.fmin_slsqp when using very large or very small numbers","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Has anybody ever encountered problems with fmin_slsqp (or anything else in scipy.optimize) only when using very large or very small numbers?\nI am working on some python code to take a grayscale image and a mask, generate a histogram, then fit multiple gaussians to the histogram. To develop the code I used a small sample image, and after some work the code was working brilliantly. However, when I normalize the histogram first, generating bin values <<1, or when I histogram huge images, generating bin values in the hundreds of thousands, fmin_slsqp() starts failing sporadically. It quits after only ~5 iterations, usually just returning a slightly modified version of the initial guess I gave it, and returns exit mode 8, which means \"Positive directional derivative for linesearch.\" If I check the size of the bin counts at the beginning and scale them into the neighborhood of ~100-1000, fmin_slsqp() works as usual. I just un-scale things before returning the results. I guess I could leave it like that, but it feels like a hack.\nI have looked around and found folks talking about the epsilon value, which is basically the dx used for approximating derivatives, but tweaking that has not helped. 
Other than that I haven't found anything useful yet. Any ideas would be greatly appreciated. Thanks in advance.\njames","AnswerCount":3,"Available Count":3,"Score":0.2605204458,"is_accepted":false,"ViewCount":5001,"Q_Id":5023846,"Users Score":4,"Answer":"Are you updating your initial guess (\"x0\") when your underlying data changes scale dramatically? for any iterative linear optimization problem, these problems will occur if your initial guess is far from the data you're trying to fit. It's more of a optimization problem than a scipy problem.","Q_Score":5,"Tags":"python,scipy,histogram,gaussian,least-squares","A_Id":5690969,"CreationDate":"2011-02-17T00:27:00.000","Title":"Problem with scipy.optimize.fmin_slsqp when using very large or very small numbers","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Has anybody ever encountered problems with fmin_slsqp (or anything else in scipy.optimize) only when using very large or very small numbers?\nI am working on some python code to take a grayscale image and a mask, generate a histogram, then fit multiple gaussians to the histogram. To develop the code I used a small sample image, and after some work the code was working brilliantly. However, when I normalize the histogram first, generating bin values <<1, or when I histogram huge images, generating bin values in the hundreds of thousands, fmin_slsqp() starts failing sporadically. It quits after only ~5 iterations, usually just returning a slightly modified version of the initial guess I gave it, and returns exit mode 8, which means \"Positive directional derivative for linesearch.\" If I check the size of the bin counts at the beginning and scale them into the neighborhood of ~100-1000, fmin_slsqp() works as usual. I just un-scale things before returning the results. I guess I could leave it like that, but it feels like a hack.\nI have looked around and found folks talking about the epsilon value, which is basically the dx used for approximating derivatives, but tweaking that has not helped. Other than that I haven't found anything useful yet. Any ideas would be greatly appreciated. Thanks in advance.\njames","AnswerCount":3,"Available Count":3,"Score":0.3215127375,"is_accepted":false,"ViewCount":5001,"Q_Id":5023846,"Users Score":5,"Answer":"I've had similar problems optimize.leastsq. The data I need to deal with often are very small, like 1e-18 and such, and I noticed that leastsq doesn't converge to best fit parameters in those cases. Only when I scale the data to something more common (like in hundreds, thousands, etc., something you can maintain resolution and dynamic range with integers), I can let leastsq converge to something very reasonable.\nI've been trying around with those optional tolerance parameters so that I don't have to scale data before optimizing, but haven't had much luck with it...\nDoes anyone know a good general approach to avoid this problem with the functions in the scipy.optimize package? I'd appreciate you could share... 
I think the root is the same problem with the OP's.","Q_Score":5,"Tags":"python,scipy,histogram,gaussian,least-squares","A_Id":8394696,"CreationDate":"2011-02-17T00:27:00.000","Title":"Problem with scipy.optimize.fmin_slsqp when using very large or very small numbers","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, i have a set of points (x,y), and i want to be able to draw the largest polygon with these points as vertices. I can use patches.Polygon() in matplotlib, but this simply draws lines between the points in the order i give them. This doesn't automatically do what i want. As an example, if a want to draw a square, and sort the points by increasing x, and then by increasing y, i won't get a square, but two connecting triangles. (the line \"crosses over\")\nSo the problem now is to find a way to sort the list of points such that i \"go around the outside\" of the polygon when iterating over this list.\nOr is there maybe some other functionality in Matplotlib which can do this for me?","AnswerCount":8,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":5735,"Q_Id":5040412,"Users Score":2,"Answer":"From your comments to other answers, you seem to already get the set of points defining the convex hull, but they're not ordered. The easiest way to order them would be to take a point inside the convex hull as the origin of a new coordinate frame. You then transform the (most probably) Cartesian coordinates of your points into polar coordinates, with respect to this new frame. If you order your points with respect to their polar angle coordinate, you can draw your convex hull. This is only valid if the set of your points defined a convex (non-concave) hull.","Q_Score":9,"Tags":"python,matplotlib","A_Id":5040977,"CreationDate":"2011-02-18T10:54:00.000","Title":"How to draw the largest polygon from a set of points","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am quite new in Python.\nI am trying to use the nltk.cluster package to apply a simple kMeans to a word-document matrix. While it works when the matrix is a list of numpy array-like objects, I wasn't able to make it work for a sparse matrix representation (such as csc_matrix, csr_matrix or lil_matrix). \nAll the information that I found was:\n\nNote that the vectors must use numpy array-like objects. nltk_contrib.unimelb.tacohn.SparseArrays may be used for efficiency when required\n\nI do not understand what this means. Can anyone help me in this matter?\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":824,"Q_Id":5044359,"Users Score":1,"Answer":"It means that when you pass in the input vector, you can either pass in a numpy.array() or a nltk_contrib.unimelb.tacohn.SparseArrays.\nI suggest you look at the package nltk_contrib.unimelb.tacohn to find the SparseArrays class. 
Then try to create your data with this class before passing it into nltk.cluster","Q_Score":0,"Tags":"python,nltk","A_Id":5085486,"CreationDate":"2011-02-18T17:15:00.000","Title":"nltk.cluster using a sparse representation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I try to do named entity recognition in python using NLTK.\nI want to extract personal list of skills.\nI have the list of skills and would like to search them in requisition and tag the skills.\nI noticed that NLTK has NER tag for predefine tags like Person, Location etc.\nIs there a external gazetter tagger in Python I can use?\nany idea how to do it more sophisticated than search of terms ( sometimes multi words term )?\nThanks,\nAssaf","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2226,"Q_Id":5084578,"Users Score":1,"Answer":"I haven't used NLTK enough recently, but if you have words that you know are skills, you don't need to do NER- just a text search.\nMaybe use Lucene or some other search library to find the text, and then annotate it? That's a lot of work but if you are working with a lot of data that might be ok. Alternatively, you could hack together a regex search which will be slower but probably work ok for smaller amounts of data and will be much easier to implement.","Q_Score":4,"Tags":"python,nlp,nltk,named-entity-recognition","A_Id":6637984,"CreationDate":"2011-02-22T22:07:00.000","Title":"Named Entity Recognition from personal Gazetter using Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"well i've seen some code to convert RGB to HSL; but how to do it fast in python.\nIts strange to me, that for example photoshop does this within a second on a image, while in python this often takes forever. Well at least the code i use; so think i'm using wrong code to do it\nIn my case my image is a simple but big raw array [r,g,b,r,g,b,r,g,b ....]\nI would like this to be [h,s,l,h,s,l,h,s,l .......]\nAlso i would like to be able to do hsl to rgb\nthe image is actually 640x 480 pixels; \nWould it require some library or wrapper around c code (i never created a wrapper) to get it done fast ?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":6609,"Q_Id":5112248,"Users Score":1,"Answer":"One option is to use OpenCV. Their Python bindings are pretty good (although not amazing). The upside is that it is a very powerful library, so this would just be the tip of the iceberg.\nYou could probably also do this very efficiently using numpy.","Q_Score":3,"Tags":"python,c,rgb","A_Id":5112349,"CreationDate":"2011-02-25T00:26:00.000","Title":"Python RGB array to HSL and back","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What i need is a way to get \"fancy indexing\" (y = x[[0, 5, 21]]) to return a view instead of a copy. 
\nI have an array, but i want to be able to work with a subset of this array (specified by a list of indices) in such a way that the changes in this subset is also put into the right places in the large array. If i just want to do something with the first 10 elements, i can just use regular slicing y = x[0:10]. That works great, because regular slicing returns a view. The problem is if i don't want 0:10, but an arbitrary set of indices.\nIs there a way to do this?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3684,"Q_Id":5127991,"Users Score":0,"Answer":"You could theoretically create an object that performs the role of a 'fancy view' into another array, and I can think of plenty of use cases for it. The problem is, that such an object would not be compatible with the standard numpy machinery. All compiled numpy C code relies on data being accessible as an inner product of strides and indices. Generalizing this code to fundamentally different data layout formats would be a gargantuan undertaking. For a project that is trying to take on a challenge along these lines, check out continuum's Blaze.","Q_Score":19,"Tags":"python,numpy","A_Id":21030654,"CreationDate":"2011-02-26T16:05:00.000","Title":"Can I get a view of a numpy array at specified indexes? (a view from \"fancy indexing\")","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was wondering if someone can clarify this line for me.\n\nCreate a function die(x) which rolls a\n die x times keeping track of how many\n times each face comes up and returns a\n 1X6 array containing these numbers.\n\nI am not sure what this means when it says 1X6 array ? I am using the randint function from numpy so the output is already an array (or list) im not sure.\nThanks","AnswerCount":5,"Available Count":2,"Score":0.1194272985,"is_accepted":false,"ViewCount":688,"Q_Id":5136533,"Users Score":3,"Answer":"Since a dice has 6 possible outcomes, if you get a 2 three times, this would be :\n0 3 0 0 0 0","Q_Score":1,"Tags":"python","A_Id":5136547,"CreationDate":"2011-02-27T22:35:00.000","Title":"python random number","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was wondering if someone can clarify this line for me.\n\nCreate a function die(x) which rolls a\n die x times keeping track of how many\n times each face comes up and returns a\n 1X6 array containing these numbers.\n\nI am not sure what this means when it says 1X6 array ? I am using the randint function from numpy so the output is already an array (or list) im not sure.\nThanks","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":688,"Q_Id":5136533,"Users Score":1,"Answer":"if you have the results of the die rolls in a list lst, you can determine the number of times a 4 appeared by doing len([_ for _ in lst if _ == 4]). 
you should be able to figure the rest out from there.","Q_Score":1,"Tags":"python","A_Id":5136717,"CreationDate":"2011-02-27T22:35:00.000","Title":"python random number","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use a easy-to-use text miner with my python code. I will be mainly using the classification algorithms - Naive Bayes, KNN and such. \nPlease let me know what's the best option here - Weka? NLTK? SVM? or something else?\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":142,"Q_Id":5150053,"Users Score":2,"Answer":"I would go with NLTK since it's written in Python","Q_Score":0,"Tags":"python","A_Id":5150092,"CreationDate":"2011-03-01T03:03:00.000","Title":"Best text miner to use with python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to use PyArray_NewFromDescr to create numpy array object from a set of contiguous 2d arrays, without copying the data?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":444,"Q_Id":5155597,"Users Score":1,"Answer":"Short answer, no.\nNumpy expects all the data to be laid it in a simple, strided pattern. When iterating over the array, to advance in a dimension, it adds a constant, the stride size for that dimension, to the position in memory. So unless your 2-d slices are laid out regularly (e.g. every other row of a larger 3-d array), numpy will need to copy the data.\nIf you do have that order, you can do what you'll want. You'll need to make a PyArray struct where the data points to the first item, the strides are correct for layout, and the descr is correct as well. Most importantly, you'll want to set the base member to another python object to keep your big chunk of memory alive while this view exists.","Q_Score":2,"Tags":"c++,python,numpy,python-c-api","A_Id":5159762,"CreationDate":"2011-03-01T14:09:00.000","Title":"creating a 3d numpy array from a non-contigous set of contigous 2d slices","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm considering making the switch from MATLAB to Python. The application is quantitative trading and cost is not really an issue. There are a few things I love about MATLAB and am wondering how Python stacks up (could not find any answers in the reviews I've read).\n\nIs there an IDE for Python that is as good as MATLAB's (variable editor, debugger, profiler)? I've read good things about Spyder, but does it have a profiler?\nWhen you change a function on the path in MATLAB, it is automatically reloaded. Do you have to manually re-import libraries when you change them, or can this been done automatically? This is a minor thing, but actually greatly improves my productivity.","AnswerCount":11,"Available Count":5,"Score":1.0,"is_accepted":false,"ViewCount":47020,"Q_Id":5214369,"Users Score":12,"Answer":"I've been getting on very well with the Spyder IDE in the Python(x,y) distribution. 
I'm a long term user of Matlab and have known of the existence of Python for 10 years or so but it's only since I installed Python(x,y) that I've started using Python regularly.","Q_Score":26,"Tags":"python,matlab,ide","A_Id":6088759,"CreationDate":"2011-03-06T23:52:00.000","Title":"Python vs Matlab","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm considering making the switch from MATLAB to Python. The application is quantitative trading and cost is not really an issue. There are a few things I love about MATLAB and am wondering how Python stacks up (could not find any answers in the reviews I've read).\n\nIs there an IDE for Python that is as good as MATLAB's (variable editor, debugger, profiler)? I've read good things about Spyder, but does it have a profiler?\nWhen you change a function on the path in MATLAB, it is automatically reloaded. Do you have to manually re-import libraries when you change them, or can this been done automatically? This is a minor thing, but actually greatly improves my productivity.","AnswerCount":11,"Available Count":5,"Score":0.0363476168,"is_accepted":false,"ViewCount":47020,"Q_Id":5214369,"Users Score":2,"Answer":"after long long tryouts with many editors,\ni have settled for aptana ide + ipython (including notebook in internet browser)\ngreat for editing, getting help easy, try fast new things\naptana is the same as eclipse (because of pydev) but aptana has themes and different little things eclipse lacks\nabout python a little,\ndon't forget pandas, as it's (i believe) extremely powerful tool for data analysis\nit will be a beast in the future, my opinion\ni'm researching matlab, and i see some neat things there, especially gui interfaces and some other nice things\nbut python gives you flexibility and ease,\nanyway, you still have to learn the basics of python, matplotlib, numpy (and eventually pandas)\nbut from what i see, numpy and matplotlib are similar to matplotlib concepts (probably they were created with matlab in mind, right?)","Q_Score":26,"Tags":"python,matlab,ide","A_Id":10388279,"CreationDate":"2011-03-06T23:52:00.000","Title":"Python vs Matlab","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm considering making the switch from MATLAB to Python. The application is quantitative trading and cost is not really an issue. There are a few things I love about MATLAB and am wondering how Python stacks up (could not find any answers in the reviews I've read).\n\nIs there an IDE for Python that is as good as MATLAB's (variable editor, debugger, profiler)? I've read good things about Spyder, but does it have a profiler?\nWhen you change a function on the path in MATLAB, it is automatically reloaded. Do you have to manually re-import libraries when you change them, or can this been done automatically? This is a minor thing, but actually greatly improves my productivity.","AnswerCount":11,"Available Count":5,"Score":1.0,"is_accepted":false,"ViewCount":47020,"Q_Id":5214369,"Users Score":6,"Answer":"I've been in the engineering field for a while now and I've always used MATLAB for high-complexity math calculations. 
I never really had an major problems with it, but I wasn't super enthusiastic about it either. A few months ago I found out I was going to be a TA for a numerical methods class and that it would be taught using Python, so I would have to learn the language. \n What I at first thought would be extra work turned out to be an awesome hobby. I can't even begin to describe how bad MATLAB is compared to Python! What used to to take me all day to code in Matlab takes me only a few hours to write in Python. My code looks infinitely more appealing as well. Python's performance and flexibility really surprised me. With Python I can literally do anything I used to do in MATLAB and I can do it a lot better. \nIf anyone else is thinking about switching, I suggest you do it. It made my life a lot easier. I'll quote \"Python Scripting for Computational Science\" because they describe the pros of Python over MATLAB better than I do:\n\n\nthe python programming language is more powerful\nthe python environment is completely open and made for integration\n with external tools,\na complete toolbox\/module with lots of functions and classes can be contained in a single file (in contrast to a bunch of M-files),\ntransferring functions as arguments to functions is simpler,\nnested, heterogeneous data structures are simple to construct and use,\nobject-oriented programming is more convenient,\ninterfacing C, C++, and fortran code is better supported and therefore simpler,\nscalar functions work with array arguments to a larger extent (without modifications of arithmetic operators),\nthe source is free and runs on more platforms.","Q_Score":26,"Tags":"python,matlab,ide","A_Id":41910220,"CreationDate":"2011-03-06T23:52:00.000","Title":"Python vs Matlab","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm considering making the switch from MATLAB to Python. The application is quantitative trading and cost is not really an issue. There are a few things I love about MATLAB and am wondering how Python stacks up (could not find any answers in the reviews I've read).\n\nIs there an IDE for Python that is as good as MATLAB's (variable editor, debugger, profiler)? I've read good things about Spyder, but does it have a profiler?\nWhen you change a function on the path in MATLAB, it is automatically reloaded. Do you have to manually re-import libraries when you change them, or can this been done automatically? This is a minor thing, but actually greatly improves my productivity.","AnswerCount":11,"Available Count":5,"Score":0.0363476168,"is_accepted":false,"ViewCount":47020,"Q_Id":5214369,"Users Score":2,"Answer":"almost everything is covered by others .. i hope you don't need any toolboxes like optimizarion toolbox , neural network etc.. [ I didn't find these for python may be there are some .. 
i seriously doubt they might be better than Matlab ones..]\nif u don't need symbolic manipulation capability and are using windows python(x,y) is the way to go[they don't have much activity on their linux port (older versions are available)]\n(or need some minor symbolic manipulations use sympy , i think it comes with EPD and python(x,y) supersedes\/integrates EPD)\nif you need symbolic capabilities sage is the way to go, IMHO sage stands up good with Matlab as well as Mathematica ..\ni'm also trying to make a switch ...(need for my engg projs)\ni hope it helps ..","Q_Score":26,"Tags":"python,matlab,ide","A_Id":5214892,"CreationDate":"2011-03-06T23:52:00.000","Title":"Python vs Matlab","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm considering making the switch from MATLAB to Python. The application is quantitative trading and cost is not really an issue. There are a few things I love about MATLAB and am wondering how Python stacks up (could not find any answers in the reviews I've read).\n\nIs there an IDE for Python that is as good as MATLAB's (variable editor, debugger, profiler)? I've read good things about Spyder, but does it have a profiler?\nWhen you change a function on the path in MATLAB, it is automatically reloaded. Do you have to manually re-import libraries when you change them, or can this been done automatically? This is a minor thing, but actually greatly improves my productivity.","AnswerCount":11,"Available Count":5,"Score":0.0181798149,"is_accepted":false,"ViewCount":47020,"Q_Id":5214369,"Users Score":1,"Answer":"I have recently switched from MATLAB to Python (I am about 2 months into the transition), and am getting on fairly well using Sublime Text 2, using the SublimeRope and SublimeLinter plugins to provide some IDE-like capabilities, as well as pudb to provide some graphical interactive debugging capabilities.\nI have not yet explored profilers or variable editors. (I never really used the MATLAB variable editor, anyway).","Q_Score":26,"Tags":"python,matlab,ide","A_Id":9317942,"CreationDate":"2011-03-06T23:52:00.000","Title":"Python vs Matlab","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"What are the differences between python's numpy.ndarray and list datatypes? 
I have vague ideas, but would like to get a definitive answer about:\n\nSize in memory\nSpeed \/ order of access\nSpeed \/ order of modification in place but preserving length\nEffects of changing length \n\nThanks!","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":5003,"Q_Id":5224420,"Users Score":6,"Answer":"There are several differences:\n\nYou can append elements to a list, but you can't change the size of a\n\u00b4numpy.ndarray\u00b4 without making a full copy.\nLists can containt about everything, in numpy arrays all the\nelements must have the same type.\nIn practice, numpy arrays are faster for vectorial functions than\nmapping functions to lists.\nI think than modification times is not an issue, but iteration over\nthe elements is.\nNumpy arrays have many array related methods (\u00b4argmin\u00b4, \u00b4min\u00b4, \u00b4sort\u00b4,\netc).\n\nI prefer to use numpy arrays when I need to do some mathematical operations (sum, average, array multiplication, etc) and list when I need to iterate in 'items' (strings, files, etc).","Q_Score":4,"Tags":"python,arrays,list,performance,numpy","A_Id":7501276,"CreationDate":"2011-03-07T19:42:00.000","Title":"Differences between python's numpy.ndarray and list datatypes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large corpus of n-grams and several external n-grams. I want to calculate the PMI score of each external n-gram based on this corpus (the counts). \nAre there any tools to do this or can someone provide me with a piece of code in Python that can do this?\nThe problem is that my n-grams are 2-grams, 3-grams, 4-grams, and 5-grams. So calculating probabilities for 3-grams and more are really time-consuming.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4572,"Q_Id":5231627,"Users Score":5,"Answer":"If I'm understanding your problem correctly, you want to compute things like log { P(\"x1 x2 x3 x4 x5\") \/ P(\"x1\") P(\"x2\") ... P(\"x5\") } where P measures the probability that any given 5-gram or 1-gram is a given thing (and is basically a ratio of counts, perhaps with Laplace-style offsets). So, make a single pass through your corpus and store counts of (1) each 1-gram, (2) each n-gram (use a dict for the latter), and then for each external n-gram you do a few dict lookups, a bit of arithmetic, and you're done. One pass through the corpus at the start, then a fixed amount of work per external n-gram.\n(Note: Actually I'm not sure how one defines PMI for more than two random variables; perhaps it's something like log P(a)P(b)P(c)P(abc) \/ P(ab)P(bc)P(a_c). But if it's anything at all along those lines, you can do it the same way: iterate through your corpus counting lots of things, and then all the probabilities you need are simply ratios of the counts, perhaps with Laplace-ish corrections.)\nIf your corpus is so big that you can't fit the n-gram dict in memory, then divide it into kinda-memory-sized chunks, compute n-gram dicts for each chunk and store them on disc in a form that lets you get at any given n-gram's entry reasonably efficiently; then, for each extern n-gram, go through the chunks and add up the counts.\nWhat form? Up to you. 
One simple option: in lexicographic order of the n-gram (note: if you're working with words rather than letters, you may want to begin by turning words into numbers; you'll want a single preliminary pass over your corpus to do this); then finding the n-gram you want is a binary search or something of the kind, which with chunks 1GB in size would mean somewhere on the order of 15-20 seeks per chunk; you could add some extra indexing to reduce this. Or: use a hash table on disc, with Berkeley DB or something; in that case you can forgo the chunking. Or, if the alphabet is small (e.g., these are letter n-grams rather than word n-grams and you're processing plain English text), just store them in a big array, with direct lookup -- but in that case, you can probably fit the whole thing in memory anyway.","Q_Score":4,"Tags":"python,n-gram","A_Id":5231844,"CreationDate":"2011-03-08T11:09:00.000","Title":"Calculating point-wise mutual information (PMI) score for n-grams in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to pipe numpy data (from one python script ) into the other?\nsuppose that script1.py looks like this:\nx = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']})\nprint x\nSuppose that from the linux command, I run the following:\npython script1.py | script2.py\nWill script2.py get the piped numpy data as an input (stdin)? will the data still be in the same format of numpy? (so that I can, for example, perform numpy operations on it from within script2.py)?","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":751,"Q_Id":5268391,"Users Score":3,"Answer":"Check out the save and load functions. I don't think they would object to being passed a pipe instead of a file.","Q_Score":1,"Tags":"python,linux,ubuntu,numpy,pipe","A_Id":5268722,"CreationDate":"2011-03-11T02:39:00.000","Title":"Pipe numpy data in Linux?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to pipe numpy data (from one python script ) into the other?\nsuppose that script1.py looks like this:\nx = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']})\nprint x\nSuppose that from the linux command, I run the following:\npython script1.py | script2.py\nWill script2.py get the piped numpy data as an input (stdin)? will the data still be in the same format of numpy? (so that I can, for example, perform numpy operations on it from within script2.py)?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":751,"Q_Id":5268391,"Users Score":2,"Answer":"No, data is passed through a pipe as text. 
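The serialization this answer goes on to describe can be done with NumPy's own .npy format; a minimal sketch, assuming Python 3's binary stdin/stdout buffers and the script names from the question:

```python
# script1.py -- writer side (sketch)
import sys
import numpy as np

x = np.zeros(3, dtype={'names': ['col1', 'col2'], 'formats': ['i4', 'f4']})
np.save(sys.stdout.buffer, x)                        # .npy bytes go straight into the pipe

# script2.py -- reader side (sketch)
import io
import sys
import numpy as np

x = np.load(io.BytesIO(sys.stdin.buffer.read()))     # structured dtype and shape survive
print(x.dtype, x.shape)
```

Run as `python script1.py | python script2.py`; the reader gets a real ndarray back, so normal NumPy operations work on it.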
You'll need to serialize the data in script1.py before writing, and deserialize it in script2.py after reading.","Q_Score":1,"Tags":"python,linux,ubuntu,numpy,pipe","A_Id":5268401,"CreationDate":"2011-03-11T02:39:00.000","Title":"Pipe numpy data in Linux?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to make a program in python that chooses cars from an array, that is filled with the mass of 10 cars. The idea is that it fills a barge, that can hold ~8 tonnes most effectively and minimum space is left unfilled. My idea is, that it makes variations of the masses and chooses one, that is closest to the max weight. But since I'm new to algorithms, I don't have a clue how to do it","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":230,"Q_Id":5299280,"Users Score":1,"Answer":"I'd solve this exercise with dynamic programming. You should be able to get the optimal solution in O(m*n) operations (n beeing the number of cars, m beeing the total mass).\nThat will only work however if the masses are all integers.\nIn general you have a binary linear programming problem. Those are very hard in general (NP-complete).\nHowever, both ways lead to algorithms which I wouldn't consider to be beginners material. You might be better of with trial and error (as you suggested) or simply try every possible combination.","Q_Score":2,"Tags":"python,algorithm,variations","A_Id":5299434,"CreationDate":"2011-03-14T13:35:00.000","Title":"Program, that chooses the best out of 10","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"BACKGROUND\nThe issue I'm working with is as follows:\n\nWithin the context of an experiment I am designing for my research, I produce a large number of large (length 4M) arrays which are somewhat sparse, and thereby could be stored as scipy.sparse.lil_matrix instances, or simply as scipy.array instances (the space gain\/loss isn't the issue here).\nEach of these arrays must be paired with a string (namely a word) for the data to make sense, as they are semantic vectors representing the meaning of that string. I need to preserve this pairing.\nThe vectors for each word in a list are built one-by-one, and stored to disk before moving on to the next word.\nThey must be stored to disk in a manner which could be then retrieved with dictionary-like syntax. For example if all the words are stored in a DB-like file, I need to be able to open this file and do things like vector = wordDB[word].\n\nCURRENT APPROACH\nWhat I'm currently doing:\n\nUsing shelve to open a shelf named wordDB\nEach time the vector (currently using lil_matrix from scipy.sparse) for a word is built, storing the vector in the shelf: wordDB[word] = vector\nWhen I need to use the vectors during the evaluation, I'll do the reverse: open the shelf, and then recall vectors by doing vector = wordDB[word] for each word, as they are needed, so that not all the vectors need be held in RAM (which would be impossible).\n\nThe above 'solution' fits my needs in terms of solving the problem as specified. 
The issue is simply that when I wish to use this method to build and store vectors for a large amount of words, I simply run out of disk space.\nThis is, as far as I can tell, because shelve pickles the data being stored, which is not an efficient way of storing large arrays, thus rendering this storage problem intractable with shelve for the number of words I need to deal with.\nPROBLEM\nThe question is thus: is there a way of serializing my set of arrays which will:\n\nSave the arrays themselves in compressed binary format akin to the .npy files generated by scipy.save?\nMeet my requirement that the data be readable from disk as a dictionary, maintaining the association between words and arrays?","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":3381,"Q_Id":5330010,"Users Score":2,"Answer":"I would suggest to use scipy.save and have an dictionnary between the word and the name of the files.","Q_Score":7,"Tags":"python,serialization,numpy,scipy","A_Id":5330143,"CreationDate":"2011-03-16T18:30:00.000","Title":"Dictionary-like efficient storing of scipy\/numpy arrays","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a problem and one solution would require an input of every 14x10 matrix that is possible to be made up of 1's and 0's... how can I generate these so that I can input every possible 14x10 matrix into another function? Thank you! \nAdded March 21: It looks like I didn't word my post appropriately. Sorry. What I'm trying to do is optimize the output of 10 different production units (given different speeds and amounts of downtime) for several scenarios. My goal is to place blocks of downtime to minimized the differences in production on a day-to-day basis. The amount of downtime and frequency each unit is allowed is given. I am currently trying to evaluate a three week cycle, meaning every three weeks each production unit is taken down for a given amount of hours. I was asking the computer to determine the order the units would be taken down based on the constraint that the lines come down only once every 3 weeks and the difference in daily production is the smallest possible. My first approach was to use Excel (as I tried to describe above) and it didn't work (no suprise there)... where 1- running, 0- off and when these are summed to calculate production. The calculated production is subtracted from a set max daily production. Then, these differences were compared going from Mon-Tues, Tues-Wed, etc for a three week time frame and minimized using solver. My next approach was to write a Matlab code where the input was a tolerance (set allowed variation day-to-day). Is there a program that already does this or an approach to do this easiest? It seems simple enough, but I'm still thinking through the different ways to go about this. Any insight would be much appreciated.","AnswerCount":11,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":3156,"Q_Id":5357171,"Users Score":6,"Answer":"Generating Every possible matrix of 1's and 0's for 14*10 would generate 2**140 matrixes. I don't believe you would have enough lifetime for this. I don't know, if the sun would still shine before you finish that. This is why it is impossible to generate all those matrices. 
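On the word-vector storage question above: a minimal sketch of the suggested "one file per word plus a name map" layout (directory and file names here are hypothetical, and dense ndarrays are assumed; sparse matrices would need their own save routine):

```python
import json
import os
import numpy as np

STORE = 'vectors'                      # hypothetical directory holding one .npy per word
os.makedirs(STORE, exist_ok=True)
index = {}                             # word -> file name; small enough to dump as JSON

def save_vector(word, vec):
    fname = index.get(word, '%06d.npy' % len(index))
    np.save(os.path.join(STORE, fname), vec)
    index[word] = fname

def load_vector(word):
    return np.load(os.path.join(STORE, index[word]))

def save_index():
    with open(os.path.join(STORE, 'index.json'), 'w') as f:
        json.dump(index, f)            # persist the name map next to the arrays
```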
You must look for some other solution, this looks like a brute force.","Q_Score":1,"Tags":"python,arrays,matlab,matrix","A_Id":5357196,"CreationDate":"2011-03-18T19:55:00.000","Title":"how to generate all possible combinations of a 14x10 matrix containing only 1's and 0's","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a problem and one solution would require an input of every 14x10 matrix that is possible to be made up of 1's and 0's... how can I generate these so that I can input every possible 14x10 matrix into another function? Thank you! \nAdded March 21: It looks like I didn't word my post appropriately. Sorry. What I'm trying to do is optimize the output of 10 different production units (given different speeds and amounts of downtime) for several scenarios. My goal is to place blocks of downtime to minimized the differences in production on a day-to-day basis. The amount of downtime and frequency each unit is allowed is given. I am currently trying to evaluate a three week cycle, meaning every three weeks each production unit is taken down for a given amount of hours. I was asking the computer to determine the order the units would be taken down based on the constraint that the lines come down only once every 3 weeks and the difference in daily production is the smallest possible. My first approach was to use Excel (as I tried to describe above) and it didn't work (no suprise there)... where 1- running, 0- off and when these are summed to calculate production. The calculated production is subtracted from a set max daily production. Then, these differences were compared going from Mon-Tues, Tues-Wed, etc for a three week time frame and minimized using solver. My next approach was to write a Matlab code where the input was a tolerance (set allowed variation day-to-day). Is there a program that already does this or an approach to do this easiest? It seems simple enough, but I'm still thinking through the different ways to go about this. Any insight would be much appreciated.","AnswerCount":11,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3156,"Q_Id":5357171,"Users Score":0,"Answer":"Instead of just suggesting the this is unfeasible, I would suggest considering a scheme that samples the important subset of all possible combinations instead of applying a brute force approach. As one of your replies suggested, you are doing minimization. There are numerical techniques to do this such as simulated annealing, monte carlo sampling as well as traditional minimization algorithms. You might want to look into whether one is appropriate in your case.","Q_Score":1,"Tags":"python,arrays,matlab,matrix","A_Id":5357340,"CreationDate":"2011-03-18T19:55:00.000","Title":"how to generate all possible combinations of a 14x10 matrix containing only 1's and 0's","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a problem and one solution would require an input of every 14x10 matrix that is possible to be made up of 1's and 0's... how can I generate these so that I can input every possible 14x10 matrix into another function? Thank you! \nAdded March 21: It looks like I didn't word my post appropriately. 
Sorry. What I'm trying to do is optimize the output of 10 different production units (given different speeds and amounts of downtime) for several scenarios. My goal is to place blocks of downtime to minimized the differences in production on a day-to-day basis. The amount of downtime and frequency each unit is allowed is given. I am currently trying to evaluate a three week cycle, meaning every three weeks each production unit is taken down for a given amount of hours. I was asking the computer to determine the order the units would be taken down based on the constraint that the lines come down only once every 3 weeks and the difference in daily production is the smallest possible. My first approach was to use Excel (as I tried to describe above) and it didn't work (no suprise there)... where 1- running, 0- off and when these are summed to calculate production. The calculated production is subtracted from a set max daily production. Then, these differences were compared going from Mon-Tues, Tues-Wed, etc for a three week time frame and minimized using solver. My next approach was to write a Matlab code where the input was a tolerance (set allowed variation day-to-day). Is there a program that already does this or an approach to do this easiest? It seems simple enough, but I'm still thinking through the different ways to go about this. Any insight would be much appreciated.","AnswerCount":11,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3156,"Q_Id":5357171,"Users Score":0,"Answer":"Are you saying that you have a table with 140 cells and each value can be 1 or 0 and you'd like to generate every possible output? If so, you would have 2^140 possible combinations...which is quite a large number.","Q_Score":1,"Tags":"python,arrays,matlab,matrix","A_Id":5357209,"CreationDate":"2011-03-18T19:55:00.000","Title":"how to generate all possible combinations of a 14x10 matrix containing only 1's and 0's","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently I am working on a python project that contains sub modules and uses numpy\/scipy. Ipython is used as interactive console. Unfortunately I am not very happy with workflow that I am using right now, I would appreciate some advice.\nIn IPython, the framework is loaded by a simple import command. However, it is often necessary to change code in one of the submodules of the framework. At this point a model is already loaded and I use IPython to interact with it. \nNow, the framework contains many modules that depend on each other, i.e. when the framework is initially loaded the main module is importing and configuring the submodules. The changes to the code are only executed if the module is reloaded using reload(main_mod.sub_mod). This is cumbersome as I need to reload all changed modules individually using the full path. It would be very convenient if reload(main_module) would also reload all sub modules, but without reloading numpy\/scipy..","AnswerCount":14,"Available Count":2,"Score":0.0142847425,"is_accepted":false,"ViewCount":217455,"Q_Id":5364050,"Users Score":1,"Answer":"Note that the above mentioned autoreload only works in IntelliJ if you manually save the changed file (e.g. using ctrl+s or cmd+s). 
It doesn't seem to work with auto-saving.","Q_Score":475,"Tags":"python,ipython","A_Id":58834043,"CreationDate":"2011-03-19T18:39:00.000","Title":"Reloading submodules in IPython","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently I am working on a python project that contains sub modules and uses numpy\/scipy. Ipython is used as interactive console. Unfortunately I am not very happy with workflow that I am using right now, I would appreciate some advice.\nIn IPython, the framework is loaded by a simple import command. However, it is often necessary to change code in one of the submodules of the framework. At this point a model is already loaded and I use IPython to interact with it. \nNow, the framework contains many modules that depend on each other, i.e. when the framework is initially loaded the main module is importing and configuring the submodules. The changes to the code are only executed if the module is reloaded using reload(main_mod.sub_mod). This is cumbersome as I need to reload all changed modules individually using the full path. It would be very convenient if reload(main_module) would also reload all sub modules, but without reloading numpy\/scipy..","AnswerCount":14,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":217455,"Q_Id":5364050,"Users Score":0,"Answer":"Any subobjects will not be reloaded by this, I believe you have to use IPython's deepreload for that.","Q_Score":475,"Tags":"python,ipython","A_Id":57911718,"CreationDate":"2011-03-19T18:39:00.000","Title":"Reloading submodules in IPython","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have read the topic: How do I calculate a trendline for a graph? \nWhat I am looking for though is how to find the line that touches the outer extreme points of a graph. The intended use is calculation of support, resistance lines for stock charts. So it is not a merely simple regression but it should also limit the number of touch points and there should be a way to find the relevant interval.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2927,"Q_Id":5377347,"Users Score":0,"Answer":"You could think about using a method that calculates the concave hull of your data. There are probably existing python implementations you could find. This will give you the boundary that encloses your timeseries. If there are outliers in your dataset that you wish to exclude, you could apply some sort of filter or smoothing to your data before you calculate the concave hull. I'm not 100% sure what you mean by \"limit the number of touch points\" and \"find the relevant interval\", but hopefully this will get you started.","Q_Score":0,"Tags":"python,stocks","A_Id":5377789,"CreationDate":"2011-03-21T12:07:00.000","Title":"how to calculate trendline for stock charts","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My primary language is Python. 
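For the IPython reloading question above: the autoreload extension that these answers mention is usually switched on like this (a sketch of IPython magic syntax; behaviour details vary by IPython version):

```python
# inside an IPython session
%load_ext autoreload
%autoreload 2          # re-import every non-excluded module before executing new code

import main_module     # later edits to main_module's submodules are picked up automatically
```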
Often when I need to do some cpu heavy task on a numpy array I use scipy.weave.inline to hook up c++ with great results.\nI suspect many of the algorithms (machine learning stuff) can however be written simpler in a functional language (scheme, haskell...).\nI was thinking. Is it possible to access numpy array data (read and write) from a functional language instead of having to use c++?","AnswerCount":4,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":887,"Q_Id":5392551,"Users Score":-2,"Answer":"I can't imagine trying to use numpy through haskell or scheme will be easier than just writing functional python. Try using itertools and functools if you want a more functional flavored python.","Q_Score":5,"Tags":"python,haskell,functional-programming,numpy,scipy","A_Id":5392702,"CreationDate":"2011-03-22T14:08:00.000","Title":"access numpy array from a functional language","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to create about 2 million vectors w\/ 1000 slots in each (each slot merely contains an integer).\nWhat would be the best data structure for working with this amount of data? It could be that I'm over-estimating the amount of processing\/memory involved.\nI need to iterate over a collection of files (about 34.5GB in total) and update the vectors each time one of the the 2-million items (each corresponding to a vector) is encountered on a line.\nI could easily write code for this, but I know it wouldn't be optimal enough to handle the volume of the data, which is why I'm asking you experts. :)\nBest,\nGeorgina","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1730,"Q_Id":5397809,"Users Score":1,"Answer":"Use a sparse matrix assuming most entries are 0.","Q_Score":3,"Tags":"python,data-structures,vector,matrix,large-data-volumes","A_Id":5397912,"CreationDate":"2011-03-22T21:04:00.000","Title":"Python - Best data structure for incredibly large matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to plot a series of rectangles and lines based on a tab delimited text file in matplotlib. The coordinates are quite large in the data and shown be drawn to scale -- except scaled down by some factor X -- in matplotlib. \nWhat's the easiest way to do this in matplotlib? I know that there are transformations, but I am not sure how to define my own transformation (i.e. where the origin is and what the scale factor is) in matplotlib and have it easily convert between \"data space\" and \"plot space\". Can someone please show a quick example or point me to the right place?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1909,"Q_Id":5463924,"Users Score":0,"Answer":"If you simply use matplotlib's plot function, the plot will fit into one online window, so you don't really need to 'rescale' explicitly. 
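Back on the two-million-vector question above, the sparse-matrix suggestion could start out like this (a sketch assuming scipy.sparse and mostly-zero counts; names are hypothetical):

```python
import numpy as np
from scipy import sparse

# 2,000,000 items x 1,000 slots of integer counts, assumed to be mostly zero
counts = sparse.lil_matrix((2000000, 1000), dtype=np.int32)

def bump(item_id, slot):
    counts[item_id, slot] += 1         # lil_matrix makes incremental updates cheap

# once the 34.5 GB of files have been processed, convert for fast reads/arithmetic:
# counts = counts.tocsr()
```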
Linearly rescaling is pretty easy, if you include some code sample to show your formatting of the data, somebody can help you in translating the origin and scaling the coordinates.","Q_Score":2,"Tags":"python,numpy,matplotlib,scipy","A_Id":5627073,"CreationDate":"2011-03-28T19:19:00.000","Title":"transforming coordinates in matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I am in a bit of a pickle. I am trying to write plotting and fitting extensions to a Fortran77 (why this program was rewritten in F77 is a mystery too me, btw) code that requires command line input, i.e. it prompts the user for input. Currently the program uses GNUplot to plot, but the GNUplot fitting routine is less than ideal in my eyes, and calling GNUplot from Fortran is a pain in the ass to say the least.\nI have mostly been working with Numpy, Scipy and Matplotlib to satisfy my fitting and plotting needs. I was wondering if there is a way to call the F77 program in Python and then have it run like I would any other F77 program until the portion where I need it to fit and spit out some nice plots (none of this GNUplot stuff).\nI know about F2PY, but I have heard mixed things about it. I have also contemplated using pyexpect and go from there, but I have have bad experience with the way it handles changing expected prompts on the screen (or I am just using it incorrectly).\nThanks for any info on this.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":3047,"Q_Id":5465982,"Users Score":2,"Answer":"Can't you just dump the data generated by the Fortran program to a file and then read it from python ?\nNumpy can read a binary file and treat it as a array.\nGoing from here to matplotlib then should be a breeeze.","Q_Score":2,"Tags":"python,numpy,scipy,fortran77","A_Id":5466008,"CreationDate":"2011-03-28T22:54:00.000","Title":"Run Fortran command line program within Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have pretty large files, the order of 1-1.5 GB combined (mostly log files) with raw data that is easily parseable to a csv, which is subsequently supposed to be graphed to generate a set of graph images.\nCurrently, we are using bash scripts to turn the raw data into a csv file, with just the numbers that need to be graphed, and then feeding it into a gnuplot script. But this process is extremely slow. I tried to speed up the bash scripts by replacing some piped cuts, trs etc. with a single awk command, although this improved the speed, the whole thing is still very slow.\nSo, I am starting to believe there are better tools for this process. I am currently looking to rewrite this process in python+numpy or R. A friend of mine suggested using the JVM, and if I am to do that, I will use clojure, but am not sure how the JVM will perform.\nI don't have much experience in dealing with these kind of problems, so any advice on how to proceed would be great. 
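On the Fortran-plus-Python question above, the "dump to a file and read it back with numpy" suggestion is close to a one-liner on the Python side (file names, dtype and column count are hypothetical):

```python
import numpy as np

# plain-text dump written by the Fortran program
data = np.loadtxt('simulation_output.dat')

# or a flat binary dump (stream access on the Fortran side assumed, so no record markers)
data = np.fromfile('simulation_output.bin', dtype=np.float64).reshape(-1, 6)
```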
Thanks.\nEdit: Also, I will want to store (to disk) the generated intermediate data, i.e., the csv, so I don't have to re-generate it, should I choose I want a different looking graph.\nEdit 2: The raw data files have one record per one line, whose fields are separated by a delimiter (|). Not all fields are numbers. Each field I need in the output csv is obtained by applying a certain formula on the input records, which may use multiple fields from the input data. The output csv will have 3-4 fields per line, and I need graphs that plot 1-2, 1-3, 1-4 fields in a (may be) bar chart. I hope that gives a better picture.\nEdit 3: I have modified @adirau's script a little and it seems to be working pretty well. I have come far enough that I am reading data, sending to a pool of processor threads (pseudo processing, append thread name to data), and aggregating it into an output file, through another collector thread.\nPS: I am not sure about the tagging of this question, feel free to correct it.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2960,"Q_Id":5468921,"Users Score":1,"Answer":"I think python+Numpy would be the most efficient way, regarding speed and ease of implementation.\nNumpy is highly optimized so the performance is decent, and python would ease up the algorithm implementation part.\nThis combo should work well for your case, providing you optimize the loading of the file on memory, try to find the middle point between processing a data block that isn't too large but large enough to minimize the read and write cycles, because this is what will slow down the program\nIf you feel that this needs more speeding up (which i sincerely doubt), you could use Cython to speed up the sluggish parts.","Q_Score":5,"Tags":"python,r,numpy,large-files,graphing","A_Id":5469268,"CreationDate":"2011-03-29T06:53:00.000","Title":"Reading and graphing data read from huge files","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to capture WordNet selectional restrictions (such as +animate, +human, etc.) from synsets through NLTK?\nOr is there any other way of providing semantic information about synset? The closest I could get to it were hypernym relations.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":820,"Q_Id":5493565,"Users Score":0,"Answer":"You could try using some of the similarity functions with handpicked synsets, and use that to filter. But it's essentially the same as following the hypernym tree - afaik all the wordnet similarity functions use hypernym distance in their calculations. Also, there's a lot of optional attributes of a synset that might be worth exploring, but their presence can be very inconsistent.","Q_Score":7,"Tags":"python,nlp,nltk,wordnet","A_Id":5494360,"CreationDate":"2011-03-30T23:10:00.000","Title":"Wordnet selectional restrictions in NLTK","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got a long-running script, and the operations that it performs can be broken down into independent, but order-sensitive function calls. 
In addition, the pattern here is a bread-first walk over a graph, and restarting a depth after a crash is not a happy option. \nSo then, my idea is as follows: as the script runs, maintain a list of function-argument pairs. This list corresponds to the open list of a depth-first search of the graph. The data is readily accessible, so I don't need to be concerned about losing it. \nIn the (probably unavoidable) event of a crash due to network conditions or an error, the list of function argument pairs is enough to resume the operation. \nSo then, can I store the functions? My intuition says no, but I'd like some advice before I judge either way.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":852,"Q_Id":5526981,"Users Score":2,"Answer":"You can pickle plain functions. Python stores the name and module and uses that to reload it. If you want to do the same with methods and such that's a little bit trickier. It can be done by breaking such objects down into component parts and pickling those.","Q_Score":1,"Tags":"python,pickle","A_Id":5527011,"CreationDate":"2011-04-03T01:29:00.000","Title":"Can you pickle methods python method objects using any pickle module?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got a long-running script, and the operations that it performs can be broken down into independent, but order-sensitive function calls. In addition, the pattern here is a bread-first walk over a graph, and restarting a depth after a crash is not a happy option. \nSo then, my idea is as follows: as the script runs, maintain a list of function-argument pairs. This list corresponds to the open list of a depth-first search of the graph. The data is readily accessible, so I don't need to be concerned about losing it. \nIn the (probably unavoidable) event of a crash due to network conditions or an error, the list of function argument pairs is enough to resume the operation. \nSo then, can I store the functions? My intuition says no, but I'd like some advice before I judge either way.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":852,"Q_Id":5526981,"Users Score":0,"Answer":"If the arguments are all that matters, why not store the arguments? Just store it as a straight tuple.","Q_Score":1,"Tags":"python,pickle","A_Id":5526985,"CreationDate":"2011-04-03T01:29:00.000","Title":"Can you pickle methods python method objects using any pickle module?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to specify your own distance function using scikit-learn K-Means Clustering?","AnswerCount":8,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":105962,"Q_Id":5529625,"Users Score":17,"Answer":"Yes you can use a difference metric function; however, by definition, the k-means clustering algorithm relies on the eucldiean distance from the mean of each cluster. 
\nYou could use a different metric, so even though you are still calculating the mean you could use something like the mahalnobis distance.","Q_Score":213,"Tags":"python,machine-learning,cluster-analysis,k-means,scikit-learn","A_Id":9875395,"CreationDate":"2011-04-03T12:39:00.000","Title":"Is it possible to specify your own distance function using scikit-learn K-Means Clustering?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to specify your own distance function using scikit-learn K-Means Clustering?","AnswerCount":8,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":105962,"Q_Id":5529625,"Users Score":57,"Answer":"Unfortunately no: scikit-learn current implementation of k-means only uses Euclidean distances.\nIt is not trivial to extend k-means to other distances and denis' answer above is not the correct way to implement k-means for other metrics.","Q_Score":213,"Tags":"python,machine-learning,cluster-analysis,k-means,scikit-learn","A_Id":5531148,"CreationDate":"2011-04-03T12:39:00.000","Title":"Is it possible to specify your own distance function using scikit-learn K-Means Clustering?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to specify your own distance function using scikit-learn K-Means Clustering?","AnswerCount":8,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":105962,"Q_Id":5529625,"Users Score":4,"Answer":"Sklearn Kmeans uses the Euclidean distance. It has no metric parameter. This said, if you're clustering time series, you can use the tslearn python package, when you can specify a metric (dtw, softdtw, euclidean).","Q_Score":213,"Tags":"python,machine-learning,cluster-analysis,k-means,scikit-learn","A_Id":56324541,"CreationDate":"2011-04-03T12:39:00.000","Title":"Is it possible to specify your own distance function using scikit-learn K-Means Clustering?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a very large matrix(10x55678) in \"numpy\" matrix format. the rows of this matrix correspond to some \"topics\" and the columns correspond to words(unique words from a text corpus). Each entry i,j in this matrix is a probability, meaning that word j belongs to topic i with probability x. since I am using ids rather than the real words and since the dimension of my matrix is really large I need to visualized it in a way.Which visualization do you suggest? a simple plot? or a more sophisticated and informative one?(i am asking these cause I am ignorant about the useful types of visualization). If possible can you give me an example that using a numpy matrix? thanks\nthe reason I asked this question is that I want to have a general view of the word-topic distributions in my corpus. any other methods are welcome","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":30834,"Q_Id":5552641,"Users Score":2,"Answer":"The key thing to consider is whether you have important structure along both dimensions in the matrix. 
If you do then it's worth trying a colored matrix plot (e.g., imshow), but if your ten topics are basically independent, you're probably better off doing ten individual line or histogram plots. Both plots have advantages and disadvantages. \nIn particular, in full matrix plots, the z-axis color values are not very precise or quantitative, so its difficult to see, for example, small ripples on a trend, or quantitative assessments of rates of change, etc, so there's a significant cost to these. And they are also more difficult to pan and zoom since one can get lost and therefore not examine the entire plot, whereas panning along a 1D plot is trivial.\nAlso, of course, as others have mentioned, 50K points is too many to actually visualize, so you'll need to sort them, or something, to reduce the number of values that you'll actually need to visually assess.\nIn practice though, finding a good visualizing technique for a given data set is not always trivial, and for large and complex data sets, people try everything that has a chance of being helpful, and then choose what actually helps.","Q_Score":8,"Tags":"python,numpy,matplotlib,plot,visualization","A_Id":5555313,"CreationDate":"2011-04-05T13:30:00.000","Title":"plotting a 2D matrix in python, code and most useful visualization","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to create a python image processing program which reads in two images, one containing a single object and the other containing several objects. However, the first images object is present in the second image but is surrounded by other objects (some similar). \nThe images are both the same size but I am having problems in finding a method of comparing the images, picking out the matching object and then also placing a cross, or pointer of some sort on top of the object which is present in both images. \nThe Program should therefore open up both images originally needing to be compared, then after the comparison has taken place the image containing many objects should be displayed but with a pointer on the object most similar (matching) the object in the first image.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":711,"Q_Id":5555975,"Users Score":1,"Answer":"I guess the most straightforward way to achieve this is to compute the correlation map of the two images. Just convolve the two images using a scientific library such as scipy, apply a low pass filter and find the maximum value of the result.\nYou should check out the following packages:\n\nnumpy\nscipy\nmatplotlib\nPIL if your images are not in png format","Q_Score":1,"Tags":"python,algorithm,image-processing","A_Id":5556151,"CreationDate":"2011-04-05T17:31:00.000","Title":"Finding an object within an image containing many objects (Python)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to check if a matrix is positive definite or positive semidefinite using Python.\nHow can I do that? 
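For the object-matching question above, the correlation-map suggestion can be sketched with scipy: correlation is done as convolution with a flipped, mean-subtracted template (grayscale float images assumed; this is a rough template matcher, not a robust detector):

```python
import numpy as np
from scipy.signal import fftconvolve

def locate(template, scene):
    t = template - template.mean()
    s = scene - scene.mean()
    corr = fftconvolve(s, t[::-1, ::-1], mode='same')   # cross-correlation via convolution
    y, x = np.unravel_index(np.argmax(corr), corr.shape)
    return x, y                                         # approximate centre of the best match
```

The returned (x, y) is where the marker or cross could then be drawn on the second image.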
Is there a dedicated function in SciPy for that or in other modules?","AnswerCount":5,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":19269,"Q_Id":5563743,"Users Score":0,"Answer":"One good solution is to calculate all the minors of determinants and check they are all non negatives.","Q_Score":13,"Tags":"python,math,matrix,scipy,linear-algebra","A_Id":5715781,"CreationDate":"2011-04-06T08:51:00.000","Title":"Check for positive definiteness or positive semidefiniteness","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to check if a matrix is positive definite or positive semidefinite using Python.\nHow can I do that? Is there a dedicated function in SciPy for that or in other modules?","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":19269,"Q_Id":5563743,"Users Score":0,"Answer":"an easier method is to calculate the determinants of the minors for this matrx.","Q_Score":13,"Tags":"python,math,matrix,scipy,linear-algebra","A_Id":5565951,"CreationDate":"2011-04-06T08:51:00.000","Title":"Check for positive definiteness or positive semidefiniteness","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to check if a matrix is positive definite or positive semidefinite using Python.\nHow can I do that? Is there a dedicated function in SciPy for that or in other modules?","AnswerCount":5,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":19269,"Q_Id":5563743,"Users Score":18,"Answer":"I assume you already know your matrix is symmetric. \nA good test for positive definiteness (actually the standard one !) is to try to compute its Cholesky factorization. 
It succeeds iff your matrix is positive definite.\nThis is the most direct way, since it needs O(n^3) operations (with a small constant), and you would need at least n matrix-vector multiplications to test \"directly\".","Q_Score":13,"Tags":"python,math,matrix,scipy,linear-algebra","A_Id":5563883,"CreationDate":"2011-04-06T08:51:00.000","Title":"Check for positive definiteness or positive semidefiniteness","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been trying to generate data in Excel.\nI generated .CSV file.\nSo up to that point it's easy.\nBut generating graph is quite hard in Excel...\nI am wondering, is python able to generate data AND graph in excel?\nIf there are examples or code snippets, feel free to post it :)\nOr a workaround can be use python to generate graph in graphical format like .jpg, etc or .pdf file is also ok..as long as workaround doesn't need dependency such as the need to install boost library.","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":48351,"Q_Id":5568319,"Users Score":2,"Answer":"I suggest you to try gnuplot while drawing graph from data files.","Q_Score":6,"Tags":"python,excel,charts,export-to-excel","A_Id":5568485,"CreationDate":"2011-04-06T14:47:00.000","Title":"use python to generate graph in excel","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm confused about what the different between axes and axis is in matplotlib. Could someone please explain in an easy-to-understand way?","AnswerCount":3,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":16337,"Q_Id":5575451,"Users Score":15,"Answer":"in the context of matplotlib,\naxes is not the plural form of axis, it actually denotes the plotting area, including all axis.","Q_Score":75,"Tags":"python,matplotlib","A_Id":55520433,"CreationDate":"2011-04-07T03:00:00.000","Title":"Difference between \"axes\" and \"axis\" in matplotlib?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm confused about what the different between axes and axis is in matplotlib. Could someone please explain in an easy-to-understand way?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":16337,"Q_Id":5575451,"Users Score":68,"Answer":"Axis is the axis of the plot, the thing that gets ticks and tick labels. The axes is the area your plot appears in.","Q_Score":75,"Tags":"python,matplotlib","A_Id":5575468,"CreationDate":"2011-04-07T03:00:00.000","Title":"Difference between \"axes\" and \"axis\" in matplotlib?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a scheme-less database to store roughly 10[TB] of data on disk, ideally, using a python client. 
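The Cholesky test described above is a few lines with numpy (symmetric input assumed; for positive semidefiniteness an eigenvalue check with a small tolerance is the usual fallback):

```python
import numpy as np

def is_positive_definite(a):
    try:
        np.linalg.cholesky(a)          # raises LinAlgError if 'a' is not positive definite
        return True
    except np.linalg.LinAlgError:
        return False

def is_positive_semidefinite(a, tol=1e-10):
    return np.all(np.linalg.eigvalsh(a) >= -tol)
```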
The suggested solution should be free for commercial use, and have good performance for reads and writes.\nThe main goal here is to store time-series data, including more than billion records, accessed by time stamp.\nData would be stored in the following scheme: \nKEY --> \"FIELD_NAME.YYYYMMDD.HHMMSS\" \nVALUE --> [v1, v2, v3, v4, v5, v6] (v1..v6 are just floats)\nFor instance, suppose that:\nFIELD_NAME = \"TOMATO\"\nTIME_STAMP = \"20060316.184356\"\nVALUES = [72.34, -22.83, -0.938, 0.265, -2047.23]\nI need to be able to retrieve VALUE (the entire array) given the combination of FIELD_NAME & TIME_STAMP.\nThe query VALUES[\"TOMATO.20060316.184356\"] would return the vector [72.34, -22.83, -0.938, 0.265, -2047.23]. Reads of arrays should be as fast as possible.\nYet, I also need a way to store (in-place) a scalar value within an array . Suppose that I want to assign the 1st element of TOMATO on timestamp 2006\/03\/16.18:43:56 to be 500.867. In such a case, I need to have a fast mechanism to do so -- something like:\nVALUES[\"TOMATO.20060316.184356\"][0] = 500.867 (this would update on disk)\nCan something like MangoDB work? I will be using just one machine (no need for replication etc), running Linux.\nCLARIFICATION: only one machine will be used to store the database. Yet, I need a solution that will allow multiple machines to connect to the same database and update\/insert\/read\/write data to\/from it.","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":401,"Q_Id":5590148,"Users Score":4,"Answer":"MongoDB is likely a good choice related to performance, flexibility and usability (easy approachable). However large databases require careful planning - especially when it comes to aspects of backup and high-availability. Without further insight about project requirements there is little to say if one machine is enough or not (look at replica sets and sharding if you need options scale).\nUpdate: based on your new information - should be doable with MongoDB (test and evaluate it). Easiliy spoken: MongoDB can be the \"MySQL\" of the NoSQL databases....if you know about SQL databases then you should be able to work with MongoDB easily since it borrows a lot of ideas and concept from the SQL world. Looking at your data model...it's trivial and data can be easily retrieved and stored (not going into details)..I suggest download MongoDB and walking through the tutorial.","Q_Score":1,"Tags":"python,linux,mongodb,database,nosql","A_Id":5590275,"CreationDate":"2011-04-08T03:38:00.000","Title":"Scheme-less database solution to work on one machine only?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to randomly pick all the results, one by one (no repeats) from itertools.permutations(k)? Or this: how to build a generator of randomized permutations? Something like shuffle(permutations(k)). 
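For the time-series storage question above, the key/vector model maps naturally onto MongoDB documents. A sketch with pymongo (modern pymongo method names assumed, which postdate the original answer; database and collection names are hypothetical):

```python
import pymongo

client = pymongo.MongoClient()                   # single local mongod assumed
values = client.tsdb.values
values.create_index([('key', pymongo.ASCENDING)], unique=True)

values.insert_one({'key': 'TOMATO.20060316.184356',
                   'values': [72.34, -22.83, -0.938, 0.265, -2047.23]})

doc = values.find_one({'key': 'TOMATO.20060316.184356'})          # read the whole vector
values.update_one({'key': 'TOMATO.20060316.184356'},
                  {'$set': {'values.0': 500.867}})                # in-place update of element 0
```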
I\u2019m using Python 2.6.\nYeah, shuffle(r) could be used if r = list(permutations(k)), but such a list will take up too much time and memory when len(k) raises above 10.\nThanks.","AnswerCount":6,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":4221,"Q_Id":5602488,"Users Score":2,"Answer":"I don't know how python implements its shuffle algorithm, but the following scales in linear time, so I don't see why a length of 10 is such a big deal (unless I misunderstand your question?):\n\nstart with the list of items;\ngo through each index in the list in turn, swapping the item at that index it for an item at a random index (including the item itself) in the remainder of the list.\n\nFor a different permutation, just run the same algorithm again.","Q_Score":3,"Tags":"python,permutation,random-sample","A_Id":5602512,"CreationDate":"2011-04-09T02:22:00.000","Title":"Random picks from permutation generator?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to randomly pick all the results, one by one (no repeats) from itertools.permutations(k)? Or this: how to build a generator of randomized permutations? Something like shuffle(permutations(k)). I\u2019m using Python 2.6.\nYeah, shuffle(r) could be used if r = list(permutations(k)), but such a list will take up too much time and memory when len(k) raises above 10.\nThanks.","AnswerCount":6,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":4221,"Q_Id":5602488,"Users Score":2,"Answer":"There's no way of doing what you have asked for without writing your own version of permutations.\nConsider this:\n\nWe have a generator object containing the result of permutations.\nWe have written our own function to tell us the length of the generator.\nWe then pick an entries at random between the beginning of the list and the end.\n\nSince I have a generator if the random function picks an entry near the end of the list the only way to get to it will be to go through all the prior entries and either throw them away, which is bad, or store them in a list, which you have pointed out is problematic when you have a lot of options.\nAre you going to loop through every permutation or use just a few? If it's the latter it would make more sense to generate each new permutation at random and store then ones you've seen before in a set. If you don't use that many the overhead of having to create a new permutation each time you have a collision will be pretty low.","Q_Score":3,"Tags":"python,permutation,random-sample","A_Id":5603207,"CreationDate":"2011-04-09T02:22:00.000","Title":"Random picks from permutation generator?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some complex graphs made using matplotlib. Saving them to a pdf using the savefig command uses a vector format, and the pdf takes ages to open. 
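A rough sketch of the idea in the last permutations answer above (draw a fresh shuffle each time and remember which orderings were already produced), assuming only a small fraction of all permutations is ever consumed; the function name is illustrative:

import random

def random_permutations(items):
    # Yields distinct random permutations one at a time, without
    # materialising the full list of permutations in memory.
    seen = set()
    pool = list(items)
    while True:
        random.shuffle(pool)          # in-place Fisher-Yates shuffle
        key = tuple(pool)
        if key not in seen:           # skip orderings produced before
            seen.add(key)
            yield key
        # The caller must stop before all permutations are exhausted,
        # otherwise this loop never terminates.

gen = random_permutations("abcd")
print(next(gen))
print(next(gen))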
Is there any way to save the figure to pdf as a raster image to get around this problem?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":5667,"Q_Id":5609969,"Users Score":3,"Answer":"Not that I know, but you can use the 'convert' program (ImageMagick') to convert a jpg to a pdf: `convert file.jpg file.pdf'.","Q_Score":14,"Tags":"python,pdf,matplotlib,raster","A_Id":5610731,"CreationDate":"2011-04-10T05:27:00.000","Title":"How to save figures to pdf as raster images in matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to visualize the data, hopefully make it interactive. Right now I'm using NetworkX and Matplotlib, which maxes out my 8gb when I attempt to 'draw' the graph. I don't know what options and techniques exist for handling such a large cluster** of data. If someone could point me in the right direction, that'd be great. I also have a CUDA enabled GFX card if that could be of use. \nRight now I'm thinking of drawing only the most connected nodes, say top 5% of vertices with the most edges, then filling in less connected nodes as the user zooms or clicks.","AnswerCount":6,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":3571,"Q_Id":5648097,"Users Score":2,"Answer":"You should ask on the official wxPython mailing list. There are people there that can probably help you. I am surprised that matplotlib isn't able to do this though. It may just require you to restructure your code in some way. Right now, the main ways to draw in wxPython are via the various DCs, one of the FloatCanvas widgets or for graphing, wx.Plot or matplotlib.","Q_Score":12,"Tags":"python,graph,wxpython,visualization","A_Id":5651029,"CreationDate":"2011-04-13T11:02:00.000","Title":"Python tools to visualize 100k Vertices and 1M Edges?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to find a tool that will manage a bunch of jobs on 100 machines in a cluster (submit the jobs to the machines; make sure that jobs are run etc).\nWhich tool would be more simple to install \/ manage:\n(1) Hadoop?\n(2) Disco?\n(3) Condor?\nIdeally, I am searching for a solution that would be as simple as possible, yet be robust.\nPython integration is also a plus.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1606,"Q_Id":5682175,"Users Score":3,"Answer":"I'm unfamiliar with Disco and Condor, but I can answer regarding Hadoop:\nHadoop pros: \n\nRobust and proven - probably more than anything else out there. Used by many organizations (including the one I work for) to run clusters of 100s of nodes and more.\nLarge ecosystem = support + many subprojects to make life easier (e.g. Pig, Hive)\nPython support should be possible through the streaming MR feature, or maybe Jython?\n\nHadoop cons:\n\nNeither simple nor elegant (imho). You'll have to spend time learning.","Q_Score":4,"Tags":"python,distributed-computing","A_Id":5682297,"CreationDate":"2011-04-15T20:43:00.000","Title":"Hadoop Vs. Disco Vs. 
Condor?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to generate compound charts (e.g: Bar+line) from my database using python.\nHow can i do this ?\nThanks in Advance","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":277,"Q_Id":5693151,"Users Score":1,"Answer":"Pretty easy to do with pygooglechart - \nYou can basically follow the bar chart examples that ship with the software and then use the add_data_line method to make the lines on top of the bar chart","Q_Score":0,"Tags":"python,charts","A_Id":6272840,"CreationDate":"2011-04-17T11:09:00.000","Title":"Compoud charts with python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a trained and pickled NLTK tagger (Brill's transformational rule-based tagger).\nI want to use it on GAE. What the best way to do it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":312,"Q_Id":5696995,"Users Score":3,"Answer":"If your NLTK tagger code and data is of limited size, then carry it along with your GAE code. \nIf you have to act upon it to retrain the set, then storing the content of the file as a BLOB in the datastore would be an option, so that you get, analyze, retrain and put.But that will limit size of dataitem to be less than 1 MB because of GAE hardlimit.","Q_Score":0,"Tags":"python,google-app-engine,pickle,nltk","A_Id":5697057,"CreationDate":"2011-04-17T22:49:00.000","Title":"Python: How to load and use trained and pickled NLTK tagger to GAE?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any clean way of setting numpy to use float32 values instead of float64 globally?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":13987,"Q_Id":5721831,"Users Score":13,"Answer":"Not that I am aware of. You either need to specify the dtype explicitly when you call the constructor for any array, or cast an array to float32 (use the ndarray.astype method) before passing it to your GPU code (I take it this is what the question pertains to?). If it is the GPU case you are really worried about, I favor the latter - it can become very annoying to try and keep everything in single precision without an extremely thorough understanding of the numpy broadcasting rules and very carefully designed code. \nAnother alternative might be to create your own methods which overload the standard numpy constructors (so numpy.zeros, numpy.ones, numpy.empty). That should go pretty close to keeping everything in float32.","Q_Score":28,"Tags":"python,numpy,numbers","A_Id":5729858,"CreationDate":"2011-04-19T19:49:00.000","Title":"Python: Making numpy default to float32","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working with large, sparse matrices (document-feature matrices generated from text) in python. 
It's taking quite a bit of processing time and memory to chew through these, and I imagine that sparse matrices could offer some improvements. But I'm worried that using a sparse matrix library is going to make it harder to plug into other python (and R, through rpy2) modules.\nCan people who've crossed this bridge already offer some advice? What are the pros and cons of using sparse matrices in python\/R, in terms of performance, scalability, and compatibility?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":872,"Q_Id":5749479,"Users Score":1,"Answer":"There are several ways to represent sparse matrices (documentation for the R SparseM package reports 20 different ways to store sparse matrix data), so complete compatibility with all solutions is probably out of question. The number options also suggests that there is no best-in-all-situations solution.\nPick either the numpy sparse matrices or R's SparseM (through rpy2) according to where your heavy number crunching routines on those matrices are found (numpy or R).","Q_Score":8,"Tags":"python,r,sparse-matrix","A_Id":5887256,"CreationDate":"2011-04-21T20:32:00.000","Title":"Pros and cons to using sparse matrices in python\/R?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i am getting error:\n\nrbuild build packages numpy --no-recurse \n[Tue Apr 26 13:16:53 2011] Creating rMake build job for 1 items\n\nAdded Job 902\n numpy:source=rmake-repository.bericots.com@local:taf32-sandbox-1-devel\/1-0.2\n[13:17:23] Watching job 902 \n[13:17:24] [902] Loading 1 out of 1: numpy:source\n[13:17:26] [902] - State: Loaded \n[13:17:26] [902] - job troves set \n[13:17:27] [902] - State: Building \n[13:17:27] [902] - Building troves \nd all buildreqs: [TroveSpec('python2.4:runtime'), TroveSpec('python2.4:devel')]\n[13:17:27] [902] - numpy:source{x86} - State: Queued\n[13:17:27] [902] - numpy:source{x86} - Ready for dep resolution\n[13:17:28] [902] - numpy:source{x86} - State: Resolving\n[13:17:28] [902] - numpy:source{x86} - Resolving build requirements\n[13:17:28] [902] - State: Failed \n[13:17:28] [902] - Failed while building: Build job had failures:\n * numpy:source: Could not satisfy build requirements: python2.4:runtime=[], python2.4:devel=[]\n\n902 numpy{x86} - Resolving - (Job Failed) ([h]elp)>\nerror: Job 902 has failures, not committing\nerror: Package build failed\n\n\nAny ideas ?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":170,"Q_Id":5794093,"Users Score":0,"Answer":"the command $rbuild build packages numpy --no-recurse , should be executed at the level where the package is, not in Development\/QA or Release Directories.","Q_Score":0,"Tags":"python,numpy,centos","A_Id":5796773,"CreationDate":"2011-04-26T17:26:00.000","Title":"How to build numpy with conary recipe for x-86 CentOs","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to calculate confidence interval for the least square fit (scipy.optimize.leastsq) in python?","AnswerCount":3,"Available Count":1,"Score":0.2605204458,"is_accepted":false,"ViewCount":6800,"Q_Id":5811043,"Users Score":4,"Answer":"I am not sure what you mean by confidence 
interval.\nIn general, leastsq doesn't know much about the function that you are trying to minimize, so it can't really give a confidence interval. However, it does return an estimate of the Hessian, in other word the generalization of 2nd derivatives to multidimensional problems.\nAs hinted in the docstring of the function, you could use that information along with the residuals (the difference between your fitted solution and the actual data) to computed the covariance of parameter estimates, which is a local guess of the confidence interval. \nNote that it is only a local information, and I suspect that you can strictly speaking come to a conclusion only if your objective function is strictly convex. I don't have any proofs or references on that statement :).","Q_Score":6,"Tags":"python,scipy,least-squares,confidence-interval","A_Id":5816755,"CreationDate":"2011-04-27T21:50:00.000","Title":"confidence interval with leastsq fit in scipy python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a histogram I'm drawing in matplotlib with some 260,000 values or so.\nThe problem is that the frequency axis (y axis) on the histogram reaches high numbers such as 100,000... What I'd really like is to have the y labels as thousands, so instead of, for instance:\n\n100000\n75000\n50000\n25000\n0\n\nTo have this:\n\n100\n75\n50\n25\n0\n\nAnd then I can simply change the y axis to \"Frequency (000s)\" -- it makes it much easier to read that way. Anyone with any ideas how that can be achieved?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3743,"Q_Id":5821895,"Users Score":0,"Answer":"Just convert the values yourself before they are entered. In numpy, you can do just array\/1000 instead of array.","Q_Score":3,"Tags":"python,graph,matplotlib,histogram","A_Id":5821933,"CreationDate":"2011-04-28T16:27:00.000","Title":"Matplotlib histogram, frequency as thousands","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Anyone following CUDA will probably have seen a few of my queries regarding a project I'm involved in, but for those who haven't I'll summarize. (Sorry for the long question in advance)\nThree Kernels, One Generates a data set based on some input variables (deals with bit-combinations so can grow exponentially), another solves these generated linear systems, and another reduction kernel to get the final result out. These three kernels are ran over and over again as part of an optimisation algorithm for a particular system.\nOn my dev machine (Geforce 9800GT, running under CUDA 4.0) this works perfectly, all the time, no matter what I throw at it (up to a computational limit based on the stated exponential nature), but on a test machine (4xTesla S1070's, only one used, under CUDA 3.1) the exact same code (Python base, PyCUDA interface to CUDA kernels), produces the exact results for 'small' cases, but in mid-range cases, the solving stage fails on random iterations. 
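For the histogram question above about showing frequencies in thousands, an alternative to rescaling the data is a tick formatter that only changes how labels are printed; a minimal sketch with random stand-in data:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter

data = np.random.randn(260000) * 10   # stand-in for the real values

fig, ax = plt.subplots()
ax.hist(data, bins=50)

# Divide tick values by 1000 for display only; the data itself is untouched.
ax.yaxis.set_major_formatter(FuncFormatter(lambda y, pos: "%g" % (y / 1000.0)))
ax.set_ylabel("Frequency (000s)")
plt.show()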
\nPrevious problems I've had with this code have been to do with the numeric instability of the problem, and have been deterministic in nature (i.e fails at exactly the same stage every time), but this one is frankly pissing me off, as it will fail whenever it wants to.\nAs such, I don't have a reliable way to breaking the CUDA code out from the Python framework and doing proper debugging, and PyCUDA's debugger support is questionable to say the least.\nI've checked the usual things like pre-kernel-invocation checking of free memory on the device, and occupation calculations say that the grid and block allocations are fine. I'm not doing any crazy 4.0 specific stuff, I'm freeing everything I allocate on the device at each iteration and I've fixed all the data types as being floats.\nTL;DR, Has anyone come across any gotchas regarding CUDA 3.1 that I haven't seen in the release notes, or any issues with PyCUDA's autoinit memory management environment that would cause intermittent launch failures on repeated invocations?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":913,"Q_Id":5827219,"Users Score":-1,"Answer":"You can use nVidia CUDA Profiler and see what gets executed before the failure.","Q_Score":4,"Tags":"python,cuda,gpgpu,pycuda","A_Id":6029955,"CreationDate":"2011-04-29T02:27:00.000","Title":"PyCUDA\/CUDA: Causes of non-deterministic launch failures?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"As the title said, I'm trying to implement an algorithm that finds out the distances between all pairs of nodes in given graph. But there is more: (Things that might help you)\n\nThe graph is unweighted. Meaning that all the edges can be considered as having weight of 1.\n|E| <= 4*|V|\nThe graph is pretty big (at most ~144 depth)\nThe graph is directed\nThere might be cycles \nI'm writing my code in python (please if you reference algorithms, code would be nice too :))\n\nI know about Johnson's algorithm, Floyd-Warshal, and Dijkstra for all pairs. But these algorithms are good when the graph has weights.\nI was wondering if there is a better algorithm for my case, because those algorithms are intended for weighted graphs.\nThanks!","AnswerCount":10,"Available Count":3,"Score":0.0199973338,"is_accepted":false,"ViewCount":6778,"Q_Id":5851154,"Users Score":1,"Answer":"I would refer you to the following paper: \"Sub-cubic Cost Algorithms for the All Pairs Shortest Path Problem\" by Tadao Takaoka. There a sequential algorithm with sub-cubic complexity for graphs with unit weight (actually max edge weight = O(n ^ 0.624)) is available.","Q_Score":14,"Tags":"python,algorithm,dijkstra,shortest-path,graph-algorithm","A_Id":5852526,"CreationDate":"2011-05-01T20:23:00.000","Title":"best algorithm for finding distance for all pairs where edges' weight is 1","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As the title said, I'm trying to implement an algorithm that finds out the distances between all pairs of nodes in given graph. But there is more: (Things that might help you)\n\nThe graph is unweighted. 
Meaning that all the edges can be considered as having weight of 1.\n|E| <= 4*|V|\nThe graph is pretty big (at most ~144 depth)\nThe graph is directed\nThere might be cycles \nI'm writing my code in python (please if you reference algorithms, code would be nice too :))\n\nI know about Johnson's algorithm, Floyd-Warshal, and Dijkstra for all pairs. But these algorithms are good when the graph has weights.\nI was wondering if there is a better algorithm for my case, because those algorithms are intended for weighted graphs.\nThanks!","AnswerCount":10,"Available Count":3,"Score":0.0199973338,"is_accepted":false,"ViewCount":6778,"Q_Id":5851154,"Users Score":1,"Answer":"I'm assuming the graph is dynamic; otherwise, there's no reason not to use Floyd-Warshall to precompute all-pairs distances on such a small graph ;)\nSuppose you have a grid of points (x, y) with 0 <= x <= n, 0 <= y <= n. Upon removing an edge E: (i, j) <-> (i+1, j), you partition row j into sets A = { (0, j), ..., (i, j) }, B = { (i+1, j), ..., (n, j) } such that points a in A, b in B are forced to route around E - so you need only recompute distance for all pairs (a, b) in (A, B).\nMaybe you can precompute Floyd-Warshall, then, and use something like this to cut recomputation down to O(n^2) (or so) per graph modification...","Q_Score":14,"Tags":"python,algorithm,dijkstra,shortest-path,graph-algorithm","A_Id":6589501,"CreationDate":"2011-05-01T20:23:00.000","Title":"best algorithm for finding distance for all pairs where edges' weight is 1","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As the title said, I'm trying to implement an algorithm that finds out the distances between all pairs of nodes in given graph. But there is more: (Things that might help you)\n\nThe graph is unweighted. Meaning that all the edges can be considered as having weight of 1.\n|E| <= 4*|V|\nThe graph is pretty big (at most ~144 depth)\nThe graph is directed\nThere might be cycles \nI'm writing my code in python (please if you reference algorithms, code would be nice too :))\n\nI know about Johnson's algorithm, Floyd-Warshal, and Dijkstra for all pairs. But these algorithms are good when the graph has weights.\nI was wondering if there is a better algorithm for my case, because those algorithms are intended for weighted graphs.\nThanks!","AnswerCount":10,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":6778,"Q_Id":5851154,"Users Score":9,"Answer":"Run a breadth-first search from each node. Total time: O(|V| |E|) = O(|V|2), which is optimal.","Q_Score":14,"Tags":"python,algorithm,dijkstra,shortest-path,graph-algorithm","A_Id":5851436,"CreationDate":"2011-05-01T20:23:00.000","Title":"best algorithm for finding distance for all pairs where edges' weight is 1","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running Python2.7.1 and OpenCV 2.2 without problems in my WinXP laptop and wrote a tracking program that is working without a glitch. But for some strange reason I cannot get the same program to run in any other computer where I tried to install OpenCV and Python (using the same binaries or appropriate 64 bit binaries). 
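A small sketch of the breadth-first-search answer above for the unweighted all-pairs problem (the adjacency-dict representation is my assumption, not from the question):

from collections import deque

def all_pairs_unweighted(adj):
    # One BFS per source node; adj maps each node to its successors.
    dist = {}
    for source in adj:
        d = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in d:          # first visit = shortest distance
                    d[v] = d[u] + 1
                    queue.append(v)
        dist[source] = d                # unreachable nodes are simply absent
    return dist

graph = {"a": ["b"], "b": ["c", "a"], "c": ["a"], "d": ["a"]}
print(all_pairs_unweighted(graph)["a"])   # {'a': 0, 'b': 1, 'c': 2}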
In those computers OpenCV seems to be correctly installed (although I have only tested and CaptureFromCamera() in the webcam of the laptop), but CaptureFromFile() return 'None' and give \"error: Array should be CvMat or IplImage\" after a QueryFrame, for example.\nThis simple code:\nimport cv \/\nvideofile = cv.CaptureFromFile('a.avi') \/\nframe = cv.QueryFrame(videofile) \/\nprint type(videofile) \/\nprint type(frame)\nreturns:\ntype 'cv.Capture' \/\ntype 'NoneType'\n\nOpenCV and Python are in the windows PATH...\nI have moved the OpenCV site-packages content back and forth to the Pyhton27 Lib\\Site-packages folder.\nI tried different avi files (just in case it was some CODEC problem). This AVI uses MJPEG encoding (and GSpot reports that ffdshow Video Decoder is used for reading).\nImages work fine (I think): the simple convert code:\nim = cv.LoadImageM(\"c:\\tests\\colormap3.tif\")\ncv.SaveImage(\"c:\\tests\\colormap3-out.png\", im)\nopens, converts and saves the new image...\nI have tested with AVI files in different folders, using \"c:\\\", \"c:\/\", \"c:\\\" and \"c:\/\/\".\n\nI am lost here... Anyone has any idea of what stupid and noob mistake may be the cause of this? Thanks","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1705,"Q_Id":5858446,"Users Score":0,"Answer":"This must be an issue with the default codecs. OpenCV uses brute force methods to open video files or capture from camera. It goes by trial and error through all sources\/codecs\/apis it can find in some reasonable order. (at least 1.1 did so).\nThat means that on n different systems (or days) you may get n different ways of accessing the same video. The order of multiple webcams for instance, is also non-deterministic and may depend on plugging order or butterflies.\nFind out what your laptop uses, (re)install that on all the systems and retry.\nAlso, in the c version, you can look at the capture's properties\nlook for cvGetCaptureProperty and cvSetCaptureProperty where you might be able to hint to the format.\n[EDIT]\nJust looked i tup in the docs, these functions are also available in Python. Take a look, it should help.","Q_Score":1,"Tags":"opencv,python-2.7,iplimage","A_Id":5859924,"CreationDate":"2011-05-02T14:34:00.000","Title":"IplImage 'None' error on CaptureFromFile() - Python 2.7.1 and OpenCV 2.2 WinXP","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a way to learn to be comfortable with large data sets. I'm a university student, so everything I do is of \"nice\" size and complexity. Working on a research project with a professor this semester, and I've had to visualize relationships between a somewhat large (in my experience) data set. It was a 15 MB CSV file.\nI wrote most of my data wrangling in Python, visualized using GNUPlot.\nAre there any accessible books or websites on the subject out there? Bonus points for using Python, more bonus points for a more \"basic\" visualization system than relying on gnuplot. Cairo or something, I suppose.\nLooking for something that takes me from data mining, to processing, to visualization.\nEDIT: I'm more looking for something that will teach me the \"big ideas\". I can write the code myself, but looking for techniques people use to deal with large data sets. 
I mean, my 15 MB is small enough where I can put everything I would ever need into memory and just start crunching. What do people do to visualize 5 GB data sets?","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":2645,"Q_Id":5890935,"Users Score":2,"Answer":"If you are looking for visualization rather than data mining and analysis, The Visual Display of Quantitative Information by Edward Tufte is considered one of the best books in the field.","Q_Score":11,"Tags":"python,dataset,visualization,data-visualization","A_Id":5891605,"CreationDate":"2011-05-04T23:13:00.000","Title":"Acquiring basic skills working with visualizing\/analyzing large data sets","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a way to learn to be comfortable with large data sets. I'm a university student, so everything I do is of \"nice\" size and complexity. Working on a research project with a professor this semester, and I've had to visualize relationships between a somewhat large (in my experience) data set. It was a 15 MB CSV file.\nI wrote most of my data wrangling in Python, visualized using GNUPlot.\nAre there any accessible books or websites on the subject out there? Bonus points for using Python, more bonus points for a more \"basic\" visualization system than relying on gnuplot. Cairo or something, I suppose.\nLooking for something that takes me from data mining, to processing, to visualization.\nEDIT: I'm more looking for something that will teach me the \"big ideas\". I can write the code myself, but looking for techniques people use to deal with large data sets. I mean, my 15 MB is small enough where I can put everything I would ever need into memory and just start crunching. What do people do to visualize 5 GB data sets?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":2645,"Q_Id":5890935,"Users Score":1,"Answer":"I like the book Data Analysis with Open Source Tools by Janert. It is a pretty broad survey of data analysis methods, focusing on how to understand the system that produced the data, rather than on sophisticated statistical methods. One caveat: while the mathematics used isn't especially advanced, I do think you will need to be comfortable with mathematical arguments to gain much from the book.","Q_Score":11,"Tags":"python,dataset,visualization,data-visualization","A_Id":5908938,"CreationDate":"2011-05-04T23:13:00.000","Title":"Acquiring basic skills working with visualizing\/analyzing large data sets","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a rank-1 numpy.array of which I want to make a boxplot. However, I want to exclude all values equal to zero in the array. Currently, I solved this by looping the array and copy the value to a new array if not equal to zero. 
However, as the array consists of 86 000 000 values and I have to do this multiple times, this takes a lot of patience.\nIs there a more intelligent way to do this?","AnswerCount":7,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":149688,"Q_Id":5927180,"Users Score":0,"Answer":"[i for i in Array if i != 0.0] if the numbers are float\nor [i for i in SICER if i != 0] if the numbers are int.","Q_Score":51,"Tags":"python,arrays,numpy,filtering","A_Id":67419895,"CreationDate":"2011-05-08T11:36:00.000","Title":"How do I remove all zero elements from a NumPy array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"sorry for my English in advance. \nI am a beginner with Cassandra and his data model. I am trying to insert one million rows in a cassandra database in local on one node. Each row has 10 columns and I insert those only in one column family.\nWith one thread, that operation took around 3 min. But I would like do the same operation with 2 millions rows, and keeping a good time. Then I tried with 2 threads to insert 2 millions rows, expecting a similar result around 3-4min. bUT i gor a result like 7min...twice the first result. As I check on differents forums, multithreading is recommended to improve performance.\nThat is why I am asking that question : is it useful to use multithreading to insert data in local node (client and server are in the same computer), in only one column family?\nSome informations :\n - I use pycassa\n - I have separated commitlog repertory and data repertory on differents disks\n - I use batch insert for each thread\n - Consistency Level : ONE\n - Replicator factor : 1","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1686,"Q_Id":5950427,"Users Score":0,"Answer":"It's possible you're hitting the python GIL but more likely you're doing something wrong.\nFor instance, putting 2M rows in a single batch would be Doing It Wrong.","Q_Score":0,"Tags":"python,multithreading,insert,cassandra","A_Id":5950881,"CreationDate":"2011-05-10T13:02:00.000","Title":"Insert performance with Cassandra","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"sorry for my English in advance. \nI am a beginner with Cassandra and his data model. I am trying to insert one million rows in a cassandra database in local on one node. Each row has 10 columns and I insert those only in one column family.\nWith one thread, that operation took around 3 min. But I would like do the same operation with 2 millions rows, and keeping a good time. Then I tried with 2 threads to insert 2 millions rows, expecting a similar result around 3-4min. bUT i gor a result like 7min...twice the first result. 
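Going back to the NumPy question above about dropping zero elements: instead of the Python-level list comprehension shown in that answer, boolean masking stays inside NumPy and avoids the loop entirely; a minimal sketch:

import numpy as np

a = np.array([0.0, 1.5, 0.0, -2.0, 3.0, 0.0])

# a != 0 builds a boolean mask; indexing with it keeps only nonzero entries.
nonzero = a[a != 0]
print(nonzero)   # [ 1.5 -2.   3. ]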
As I check on differents forums, multithreading is recommended to improve performance.\nThat is why I am asking that question : is it useful to use multithreading to insert data in local node (client and server are in the same computer), in only one column family?\nSome informations :\n - I use pycassa\n - I have separated commitlog repertory and data repertory on differents disks\n - I use batch insert for each thread\n - Consistency Level : ONE\n - Replicator factor : 1","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1686,"Q_Id":5950427,"Users Score":0,"Answer":"Try running multiple clients in multiple processes, NOT threads.\nThen experiment with different insert sizes. \n1M inserts in 3 mins is about 5500 inserts\/sec, which is pretty good for a single local client. On a multi-core machine you should be able to get several times this amount provided that you use multiple clients, probably inserting small batches of rows, or individual rows.","Q_Score":0,"Tags":"python,multithreading,insert,cassandra","A_Id":5956519,"CreationDate":"2011-05-10T13:02:00.000","Title":"Insert performance with Cassandra","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"sorry for my English in advance. \nI am a beginner with Cassandra and his data model. I am trying to insert one million rows in a cassandra database in local on one node. Each row has 10 columns and I insert those only in one column family.\nWith one thread, that operation took around 3 min. But I would like do the same operation with 2 millions rows, and keeping a good time. Then I tried with 2 threads to insert 2 millions rows, expecting a similar result around 3-4min. bUT i gor a result like 7min...twice the first result. As I check on differents forums, multithreading is recommended to improve performance.\nThat is why I am asking that question : is it useful to use multithreading to insert data in local node (client and server are in the same computer), in only one column family?\nSome informations :\n - I use pycassa\n - I have separated commitlog repertory and data repertory on differents disks\n - I use batch insert for each thread\n - Consistency Level : ONE\n - Replicator factor : 1","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1686,"Q_Id":5950427,"Users Score":0,"Answer":"You might consider Redis. Its single-node throughput is supposed to be faster. It's different from Cassandra though, so whether or not it's an appropriate option would depend on your use case.","Q_Score":0,"Tags":"python,multithreading,insert,cassandra","A_Id":6078703,"CreationDate":"2011-05-10T13:02:00.000","Title":"Insert performance with Cassandra","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"sorry for my English in advance. \nI am a beginner with Cassandra and his data model. I am trying to insert one million rows in a cassandra database in local on one node. Each row has 10 columns and I insert those only in one column family.\nWith one thread, that operation took around 3 min. But I would like do the same operation with 2 millions rows, and keeping a good time. 
Then I tried with 2 threads to insert 2 millions rows, expecting a similar result around 3-4min. bUT i gor a result like 7min...twice the first result. As I check on differents forums, multithreading is recommended to improve performance.\nThat is why I am asking that question : is it useful to use multithreading to insert data in local node (client and server are in the same computer), in only one column family?\nSome informations :\n - I use pycassa\n - I have separated commitlog repertory and data repertory on differents disks\n - I use batch insert for each thread\n - Consistency Level : ONE\n - Replicator factor : 1","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1686,"Q_Id":5950427,"Users Score":0,"Answer":"The time taken doubled because you inserted twice as much data. Is it possible that you are I\/O bound?","Q_Score":0,"Tags":"python,multithreading,insert,cassandra","A_Id":8491215,"CreationDate":"2011-05-10T13:02:00.000","Title":"Insert performance with Cassandra","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm wondering if C has anything similar to the python pickle module that can dump some structured data on disk and then load it back later. \nI know that I can write my structure byte by byte to a file on disk and then read it back later, but with this approach there's still quite some work to do. For example, if I have a single link list structure, I can traverse the list from head to tail and write each node's data on disk. When I read the list back from the on-disk file, I have to reconstruct all links\nbetween each pair of nodes.\nPlease advise if there's an easier way.\nThanks heaps!","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":207,"Q_Id":5987185,"Users Score":4,"Answer":"An emphatic NO on that one, I'm afraid. C has basic file I\/O. Any structuring of data is up to you. Make up a format, dump it out, read it in.\nThere may be libraries which can do this, but by itself no C doesn't do this.","Q_Score":2,"Tags":"python,c","A_Id":5987204,"CreationDate":"2011-05-13T04:19:00.000","Title":"does C has anything like python pickle for object serialisation?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm wondering if C has anything similar to the python pickle module that can dump some structured data on disk and then load it back later. \nI know that I can write my structure byte by byte to a file on disk and then read it back later, but with this approach there's still quite some work to do. For example, if I have a single link list structure, I can traverse the list from head to tail and write each node's data on disk. 
When I read the list back from the on-disk file, I have to reconstruct all links\nbetween each pair of nodes.\nPlease advise if there's an easier way.\nThanks heaps!","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":207,"Q_Id":5987185,"Users Score":2,"Answer":"The C library functions fread(3) and fwrite(3) will read and write 'elements of data', but that's pretty fanciful way of saying \"the C library will do some multiplication and pread(2) or pwrite(2) calls behind the scenes to fill your array\".\nYou can use them on structs, but it is probably not a good idea:\n\nholes in the structs get written and read\nyou're baking in the endianness of your integers\n\nWhile you can make your own format for writing objects, you might want to see if your application could use SQLite3 for on-disk storage of objects. It's well-debugged, and if your application fits its abilities well, it might be just the ticket. (And a lot easier than writing all your own formatting code.)","Q_Score":2,"Tags":"python,c","A_Id":5987230,"CreationDate":"2011-05-13T04:19:00.000","Title":"does C has anything like python pickle for object serialisation?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Given a numpy.memmap object created with mode='r' (i.e. read-only), is there a way to force it to purge all loaded pages out of physical RAM, without deleting the object itself?\nIn other words, I'd like the reference to the memmap instance to remain valid, but all physical memory that's being used to cache the on-disk data to be uncommitted. Any views onto to the memmap array must also remain valid.\nI am hoping to use this as a diagnostic tool, to help separate \"real\" memory requirements of a script from \"transient\" requirements induced by the use of memmap.\nI'm using Python 2.7 on RedHat.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":487,"Q_Id":6021550,"Users Score":2,"Answer":"If you run \"pmap SCRIPT-PID\", the \"real\" memory shows as \"[ anon ]\" blocks, and all memory-mapped files show up with the file name in the last column.\nPurging the pages is possible at C level, if you manage to get ahold of the pointer to the beginning of the mapping and call madvise(ptr, length, MADV_DONTNEED) on it, but it's going to be cludgy.","Q_Score":0,"Tags":"python,numpy,mmap,memory-mapped-files,large-data","A_Id":6022144,"CreationDate":"2011-05-16T18:13:00.000","Title":"Purging numpy.memmap","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was wondering whether someone might know the answer to the following.\nI'm using Python to build a character-based suffix tree. There are over 11 million nodes in the tree which fits in to approximately 3GB of memory. This was down from 7GB by using the slot class method rather than the Dict method.\nWhen I serialise the tree (using the highest protocol) the resulting file is more than a hundred times smaller.\nWhen I load the pickled file back in, it again consumes 3GB of memory. 
Where does this extra overhead come from, is it something to do with Pythons handling of memory references to class instances?\nUpdate\nThank you larsmans and Gurgeh for your very helpful explanations and advice. I'm using the tree as part of an information retrieval interface over a corpus of texts.\nI originally stored the children (max of 30) as a Numpy array, then tried the hardware version (ctypes.py_object*30), the Python array (ArrayType), as well as the dictionary and Set types. \nLists seemed to do better (using guppy to profile the memory, and __slots__['variable',...]), but I'm still trying to squash it down a bit more if I can. The only problem I had with arrays is having to specify their size in advance, which causes a bit of redundancy in terms of nodes with only one child, and I have quite a lot of them. ;-)\nAfter the tree is constructed I intend to convert it to a probabilistic tree with a second pass, but may be I can do this as the tree is constructed. As construction time is not too important in my case, the array.array() sounds like something that would be useful to try, thanks for the tip, really appreciated.\nI'll let you know how it goes.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":415,"Q_Id":6041395,"Users Score":3,"Answer":"Do you construct your tree once and then use it without modifying it further? In that case you might want to consider using separate structures for the dynamic construction and the static usage.\nDicts and objects are very good for dynamic modification, but they are not very space efficient in a read-only scenario. I don't know exactly what you are using your suffix tree for, but you could let each node be represented by a 2-tuple of a sorted array.array('c') and an equally long tuple of subnodes (a tuple instead of a vector to avoid overallocation). You traverse the tree using the bisect-module for lookup in the array. The index of a character in the array will correspond to a subnode in the subnode-tuple. This way you avoid dicts, objects and vector.\nYou could do something similar during the construction process, perhaps using a subnode-vector instead of subnode-tuple. But this will of course make construction slower, since inserting new nodes in a sorted vector is O(N).","Q_Score":10,"Tags":"python,class,serialization,memory-management,pickle","A_Id":6046352,"CreationDate":"2011-05-18T07:42:00.000","Title":"Python memory serialisation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In numpy\/scipy, what's the canonical way to compute the inverse of an upper triangular matrix?\nThe matrix is stored as 2D numpy array with zero sub-diagonal elements, and the result should also be stored as a 2D array.\nedit The best I've found so far is scipy.linalg.solve_triangular(A, np.identity(n)). Is that it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":6594,"Q_Id":6042308,"Users Score":7,"Answer":"There really isn't an inversion routine, per se. 
scipy.linalg.solve is the canonical way of solving a matrix-vector or matrix-matrix equation, and it can be given explicit information about the structure of the matrix which it will use to choose the correct routine (probably the equivalent of BLAS3 dtrsm in this case).\nLAPACK does include doptri for this purpose, and scipy.linalg does expose a raw C lapack interface. If the inverse matrix is really what you want, then you could try using that.","Q_Score":13,"Tags":"python,matrix,numpy,scipy,matrix-inverse","A_Id":6042505,"CreationDate":"2011-05-18T09:09:00.000","Title":"numpy: inverting an upper triangular matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to dump a NumPy array into a CSV file? I have a 2D NumPy array and need to dump it in human-readable format.","AnswerCount":12,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":1040617,"Q_Id":6081008,"Users Score":2,"Answer":"If you want to save your numpy array (e.g. your_array = np.array([[1,2],[3,4]])) to one cell, you could convert it first with your_array.tolist().\nThen save it the normal way to one cell, with delimiter=';'\nand the cell in the csv-file will look like this [[1, 2], [2, 4]]\nThen you could restore your array like this:\nyour_array = np.array(ast.literal_eval(cell_string))","Q_Score":711,"Tags":"python,arrays,csv,numpy","A_Id":40091714,"CreationDate":"2011-05-21T10:01:00.000","Title":"Dump a NumPy array into a csv file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i looking for a example of character recognizing (just one - for example X or A)\nusing MLP, Backward propagation.\nI want a simple example, and not the entire library. Language does not matter, preferably one of those Java, Python, PHP","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":162,"Q_Id":6086560,"Users Score":1,"Answer":"Support Vector Machines tend to work much better for character recognition with error rates around 2% generally reported. I suggest that as an alternative if you're just using the character recognition as a module in a larger project.","Q_Score":0,"Tags":"java,php,python,neural-network","A_Id":6086737,"CreationDate":"2011-05-22T07:07:00.000","Title":"Backward propagation - Character Recognizing - Seeking an example","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an 2d array, A that is 6x6. I would like to take the first 2 values (index 0,0 and 0,1) and take the average of the two and insert the average into a new array that is half the column size of A (6x3) at index 0,0. Then i would get the next two indexes at A, take average and put into the new array at 0,1.\nThe only way I know how to do this is using a double for loop, but for performance purposes (I will be using arrays as big as 3000x3000) I know there is a better solution out there! 
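For the 6x6 pairwise-averaging question above, one vectorized possibility (an addition of mine, not taken from the answers) is to reshape so that each pair of adjacent columns forms its own axis and then average over that axis:

import numpy as np

A = np.arange(36, dtype=float).reshape(6, 6)

# (6, 6) -> (6, 3, 2): each row's columns grouped into adjacent pairs,
# then the mean over the last axis collapses each pair to its average.
B = A.reshape(A.shape[0], A.shape[1] // 2, 2).mean(axis=2)
print(B[0])   # [0.5 2.5 4.5] -> averages of columns (0,1), (2,3), (4,5)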
Thanks!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1976,"Q_Id":6090288,"Users Score":1,"Answer":"I don't think there is a better solution, unless you have some extra information about what's in those arrays. If they're just random numbers, you have to do (n^2)\/2 calculations, and your algorithm is reflecting that, running in O((n^2)\/2).","Q_Score":1,"Tags":"python,arrays,indexing,numpy,slice","A_Id":6090407,"CreationDate":"2011-05-22T19:43:00.000","Title":"python numpy array slicing","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am doing some simple programs with opencv in python. I want to write a few algorithms myself, so need to get at the 'raw' image data inside an image. I can't just do image[i,j] for example, how can I get at the numbers?\nThanks","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":7276,"Q_Id":6127314,"Users Score":0,"Answer":"I do not know opencv python bindings, but in C or C++ you have to get the buffer pointer stored in IplImage. This buffer is coded according to the image format (also stored in IplImage). For RGB you have a byte for R, a byte for G, a byte for B, and so on.\nLook at the API of python bindings,you will find how to access the buffer and then you can get to pixel info.\nmy2c","Q_Score":4,"Tags":"python,opencv,iplimage","A_Id":6127643,"CreationDate":"2011-05-25T15:53:00.000","Title":"Opencv... getting at the data in an IPLImage or CvMat","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a moderately large piece (a few thousand lines) of Python\/Numpy\/Scipy code that is throwing up NaNs with certain inputs. I've looked for, and found, some of the usual suspects (log(0) and the like), but none of the obvious ones seem to be the culprits in this case. \nIs there a relatively painless way (i.e., apart from putting exception handling code around each potential culprit), to find out where these NaNs are coming from?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":1232,"Q_Id":6213869,"Users Score":3,"Answer":"You can use numpy.seterr to set floating point error handling behaviour globally for all numpy routines. That should let you pinpoint where in the code they are arising from (or a least where numpy see them for the first time).","Q_Score":9,"Tags":"python,debugging,numpy,scipy,nan","A_Id":6213966,"CreationDate":"2011-06-02T11:24:00.000","Title":"Finding the calculation that generates a NaN","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am now trying some stuff with PCA but it's very important for me to know which are the features responsible for each eigenvalue. \nnumpy.linalg.eig gives us the diagonal matrix already sorted but I wanted this matrix with them at the original positions. 
Does anybody know how I can make it?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1121,"Q_Id":6227589,"Users Score":1,"Answer":"What Sven mentioned in his comments is correct. There is no \"default\" ordering of the eigenvalues. Each eigenvalue is associated with an eigenvector, and it is important is that the eigenvalue-eigenvector pair is matched correctly. You'll find that all languages and packages will do so. \nSo if R gives you eigenvalues [e1,e2,e3 and eigenvectors [v1,v2,v3], python probably will give you (say) [e3,e2,e1] and [v3,v2,v1].\nRecall that an eigenvalue tells you how much of the variance in your data is explained by the eigenvector associated with it. So, a natural sorting of the eigenvalues (that is intuitive to us) that is useful in PCA, is by size (either ascending or descending). That way, you can easily look at the eigenvalues and identify which ones to keep (large, as they explain most of the data) and which ones to throw (small, which could be high frequency features or just noise)","Q_Score":1,"Tags":"python,pca","A_Id":6229101,"CreationDate":"2011-06-03T13:18:00.000","Title":"Non sorted eigenvalues for finding features in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just started to learn to program in Python and I am trying to construct a sparse matrix using Scipy package. I found that there are different types of sparse matrices, but all of them require to store using three vectors like row, col, data; or if you want to each new entry separately, like S(i,j) = s_ij you need to initiate the matrix with a given size.\nMy question is if there is a way to store the matrix entrywise without needing the initial size, like a dictionary.","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":4150,"Q_Id":6310087,"Users Score":2,"Answer":"No. Any matrix in Scipy, sparse or not, must be instantiated with a size.","Q_Score":1,"Tags":"python,scipy,sparse-matrix","A_Id":6312608,"CreationDate":"2011-06-10T17:38:00.000","Title":"sparse matrix from dictionaries","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to load an object (of a custom class Area) from a file using pickler. I'm using python 3.1.\nThe file was made with pickle.dump(area, f)\nI get the following error, and I would like help trying to understand and fix it.\n File \"editIO.py\", line 12, in load\n area = pickle.load(f)\n File \"C:\\Python31\\lib\\pickle.py\", line 1356, in load\n encoding=encoding, errors=errors).load()\nUnicodeDecodeError: 'gbk' codec can't decode bytes in position 0-1: illegal multibyte sequence","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4835,"Q_Id":6320415,"Users Score":2,"Answer":"It's hard to say without you showing your code, but it looks like you opened the file in text mode with a \"gbk\" encoding. It should probably be opened in binary mode. 
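A minimal sketch of what opening the pickle in binary mode looks like (the file name is just a placeholder):

import pickle

# 'rb' reads raw bytes, so no text decoding (such as gbk) is attempted
# on the pickled stream.
with open("area.pkl", "rb") as f:
    area = pickle.load(f)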
If that doesn't happen, make a small code example that fails, and paste it in here.","Q_Score":1,"Tags":"python,unicode,python-3.x,decode,pickle","A_Id":6320431,"CreationDate":"2011-06-12T05:49:00.000","Title":"UnicodeDecodeError: 'gbk' codec can't decode bytes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python function that generates a list with random values.\nAfter I call this function, I call another function that plots the random values using matplotlib.\nI want to be able to click some key on the keyboard \/ mouse, and have the following happen:\n(1) a new list of random values will be re-generated\n(2) the values from (1) will be plotted (replacing the current matplotlib chart)\nMeaning, I want to be able to view new charts with a click of a button. How do I go about doing this in python?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1009,"Q_Id":6333345,"Users Score":0,"Answer":"from matplotlib.widgets import Button \nreal_points = plt.axes().scatter(x=xpts, y=ypts, alpha=.4, s=size, c='green', label='real data') \n#Reset Button\n#rect = [left, bottom, width, height]\nreset_axis = plt.axes([0.4, 0.15, 0.1, 0.04])\nbutton = Button(ax=reset_axis, label='Reset', color='lightblue' , hovercolor='0.975') \ndef reset(event):\n real_points.remove() \nbutton.on_clicked(reset) \nplt.show()","Q_Score":1,"Tags":"python,matplotlib","A_Id":26627020,"CreationDate":"2011-06-13T16:32:00.000","Title":"python matplotlib -- regenerate graph?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the fastest FFT implementation in Python?\nIt seems numpy.fft and scipy.fftpack both are based on fftpack, and not FFTW. Is fftpack as fast as FFTW? What about using multithreaded FFT, or using distributed (MPI) FFT?","AnswerCount":6,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":27277,"Q_Id":6365623,"Users Score":2,"Answer":"Where I work some researchers have compiled this Fortran library which setups and calls the FFTW for a particular problem. This Fortran library (module with some subroutines) expect some input data (2D lists) from my Python program.\nWhat I did was to create a little C-extension for Python wrapping the Fortran library, where I basically calls \"init\" to setup a FFTW planner, and another function to feed my 2D lists (arrays), and a \"compute\" function.\nCreating a C-extensions is a small task, and there a lot of good tutorials out there for that particular task.\nTo good thing about this approach is that we get speed .. a lot of speed. 
The only drawback is in the C-extension where we must iterate over the Python list, and extract all the Python data into a memory buffer.","Q_Score":31,"Tags":"python,numpy,scipy,fft,fftw","A_Id":6368360,"CreationDate":"2011-06-15T23:28:00.000","Title":"Improving FFT performance in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In a matplotlib figure, how can I make the font size for the tick labels using ax1.set_xticklabels() smaller?\nFurther, how can one rotate it from horizontal to vertical?","AnswerCount":10,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":870128,"Q_Id":6390393,"Users Score":23,"Answer":"In current versions of Matplotlib, you can do axis.set_xticklabels(labels, fontsize='small').","Q_Score":421,"Tags":"python,matplotlib","A_Id":34919615,"CreationDate":"2011-06-17T18:49:00.000","Title":"Matplotlib make tick labels font size smaller","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In a matplotlib figure, how can I make the font size for the tick labels using ax1.set_xticklabels() smaller?\nFurther, how can one rotate it from horizontal to vertical?","AnswerCount":10,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":870128,"Q_Id":6390393,"Users Score":16,"Answer":"For smaller font, I use \nax1.set_xticklabels(xticklabels, fontsize=7)\nand it works!","Q_Score":421,"Tags":"python,matplotlib","A_Id":37869225,"CreationDate":"2011-06-17T18:49:00.000","Title":"Matplotlib make tick labels font size smaller","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can't find any documentation on how numpy handles unmapping of previously memory mapped regions: munmap for numpy.memmap() and numpy.load(mmap_mode).\nMy guess is it's done only at garbage collection time, is that correct?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2924,"Q_Id":6397495,"Users Score":15,"Answer":"Yes, it's only closed when the object is garbage-collected; memmap.close method does nothing.\nYou can call x._mmap.close(), but keep in mind that any further access to the x object will crash python.","Q_Score":15,"Tags":"python,numpy,mmap","A_Id":6398543,"CreationDate":"2011-06-18T16:54:00.000","Title":"Unmap of NumPy memmap","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am not sure if this question is correct, but I am asking to resolve the doubts I have. \n\nFor Machine Learning\/Data Mining, we need to learn about data, which means you need to learn Hadoop, which has implementation in Java for MapReduce(correct me if I am wrong). 
\nHadoop also provides streaming api to support other languages(like python) \nMost grad students\/researchers I know solve ML problems in python \nwe see job posts for hadoop and Java combination very often \n\nI observed that Java and Python(in my observation) are most widely used languages for this domain. \n\nMy question is what is most popular language for working on this domain. \nwhat factors involve in deciding which language\/framework one should choose\nI know both Java and python but confused always :\n\nwhether I start programming in Java(because of hadoop implementation)\nwhether I start programming in Python(because its easier and quicker to write)\n\n\nThis is a very open ended question, I am sure the advices might help me and people who have same doubt.\nThanks a lot in advance","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1400,"Q_Id":6429772,"Users Score":0,"Answer":"Python is gaining in popularity, has a lot of libraries, and is very useful for prototyping. I find that due to the many versions of python and its dependencies on C libs to be difficult to deploy though.\nR is also very popular, has a lot of libraries, and was designed for data science. However, the underlying language design tends to make things overcomplicated.\nPersonally, I prefer Clojure because it has great data manipulation support and can interop with Java ecosystem. The downside of it currently is that there aren't too many data science libraries yet!","Q_Score":2,"Tags":"java,python,hadoop,machine-learning,bigdata","A_Id":44827155,"CreationDate":"2011-06-21T17:54:00.000","Title":"ML\/Data Mining\/Big Data : Popular language for programming and community support","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am not sure if this question is correct, but I am asking to resolve the doubts I have. \n\nFor Machine Learning\/Data Mining, we need to learn about data, which means you need to learn Hadoop, which has implementation in Java for MapReduce(correct me if I am wrong). \nHadoop also provides streaming api to support other languages(like python) \nMost grad students\/researchers I know solve ML problems in python \nwe see job posts for hadoop and Java combination very often \n\nI observed that Java and Python(in my observation) are most widely used languages for this domain. \n\nMy question is what is most popular language for working on this domain. \nwhat factors involve in deciding which language\/framework one should choose\nI know both Java and python but confused always :\n\nwhether I start programming in Java(because of hadoop implementation)\nwhether I start programming in Python(because its easier and quicker to write)\n\n\nThis is a very open ended question, I am sure the advices might help me and people who have same doubt.\nThanks a lot in advance","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1400,"Q_Id":6429772,"Users Score":0,"Answer":"I think in this field most popular combination is Java\/Hadoop. When vacancies requires also python\/perl\/ruby it usually means that they are migrating from those script languages(usually main languages till that time) to java due to moving from startup code base to enterprise. 
\nAlso in real world data mining application python is frequently used for prototyping, small sized data processing tasks.","Q_Score":2,"Tags":"java,python,hadoop,machine-learning,bigdata","A_Id":6436938,"CreationDate":"2011-06-21T17:54:00.000","Title":"ML\/Data Mining\/Big Data : Popular language for programming and community support","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given a list of tuples where each tuple consists of a probability and an item I'd like to sample an item according to its probability. For example, give the list [ (.3, 'a'), (.4, 'b'), (.3, 'c')] I'd like to sample 'b' 40% of the time. \nWhat's the canonical way of doing this in python?\nI've looked at the random module which doesn't seem to have an appropriate function and at numpy.random which although it has a multinomial function doesn't seem to return the results in a nice form for this problem. I'm basically looking for something like mnrnd in matlab.\nMany thanks.\nThanks for all the answers so quickly. To clarify, I'm not looking for explanations of how to write a sampling scheme, but rather to be pointed to an easy way to sample from a multinomial distribution given a set of objects and weights, or to be told that no such function exists in a standard library and so one should write one's own.","AnswerCount":9,"Available Count":2,"Score":0.022218565,"is_accepted":false,"ViewCount":12259,"Q_Id":6432499,"Users Score":1,"Answer":"Howabout creating 3 \"a\", 4 \"b\" and 3 \"c\" in a list an then just randomly select one. With enough iterations you will get the desired probability.","Q_Score":29,"Tags":"python,statistics,numpy,probability,random-sample","A_Id":6432586,"CreationDate":"2011-06-21T21:56:00.000","Title":"How to do weighted random sample of categories in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given a list of tuples where each tuple consists of a probability and an item I'd like to sample an item according to its probability. For example, give the list [ (.3, 'a'), (.4, 'b'), (.3, 'c')] I'd like to sample 'b' 40% of the time. \nWhat's the canonical way of doing this in python?\nI've looked at the random module which doesn't seem to have an appropriate function and at numpy.random which although it has a multinomial function doesn't seem to return the results in a nice form for this problem. I'm basically looking for something like mnrnd in matlab.\nMany thanks.\nThanks for all the answers so quickly. To clarify, I'm not looking for explanations of how to write a sampling scheme, but rather to be pointed to an easy way to sample from a multinomial distribution given a set of objects and weights, or to be told that no such function exists in a standard library and so one should write one's own.","AnswerCount":9,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":12259,"Q_Id":6432499,"Users Score":0,"Answer":"I'm not sure if this is the pythonic way of doing what you ask, but you could use\n random.sample(['a','a','a','b','b','b','b','c','c','c'],k)\nwhere k is the number of samples you want. 
\nFor a more robust method, bisect the unit interval into sections based on the cumulative probability and draw from the uniform distribution (0,1) using random.random(). In this case the subintervals would be (0,.3)(.3,.7)(.7,1). You choose the element based on which subinterval it falls into.","Q_Score":29,"Tags":"python,statistics,numpy,probability,random-sample","A_Id":6432588,"CreationDate":"2011-06-21T21:56:00.000","Title":"How to do weighted random sample of categories in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to do clustering in gensim for a given set of inputs using LDA? How can I go about it?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":13867,"Q_Id":6486738,"Users Score":0,"Answer":"The basic thing to understand here is that clustering requires your data to be present in a format and is not concerned with how did you arrive at your data. So, whether you apply clustering on the term-document matrix or on the reduced-dimension (LDA output matrix), clustering will work irrespective of that.\nJust do the other things right though, small mistakes in data formats can cost you a lot of time of research.","Q_Score":10,"Tags":"python,algorithm,cluster-analysis,latent-semantic-indexing","A_Id":37608130,"CreationDate":"2011-06-26T21:03:00.000","Title":"Clustering using Latent Dirichlet Allocation algo in gensim","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to do clustering in gensim for a given set of inputs using LDA? How can I go about it?","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":13867,"Q_Id":6486738,"Users Score":10,"Answer":"LDA produces a lower dimensional representation of the documents in a corpus. To this low-d representation you could apply a clustering algorithm, e.g. k-means. Since each axis corresponds to a topic, a simpler approach would be assigning each document to the topic onto which its projection is largest.","Q_Score":10,"Tags":"python,algorithm,cluster-analysis,latent-semantic-indexing","A_Id":6525268,"CreationDate":"2011-06-26T21:03:00.000","Title":"Clustering using Latent Dirichlet Allocation algo in gensim","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose that you have a machine that gets fed with real-time stock prices from the exchange. These prices need to be transferred to 50 other machines in your network in the fastest possible way, so that each of them can run its own processing on the data.\nWhat would be the best \/ fastest way to send the data over to the other 50 machines?\nI am looking for a solution that would work on linux, using python as the programming language. 
Some ideas that I had are:\n(1) Send it to the other machines using python's ZeroMQ module\n(2) Save the data to a shared folder and have the 50 machines read it using NFS\nAny other ideas?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":1280,"Q_Id":6549488,"Users Score":1,"Answer":"I'm pretty sure sending with ZeroMQ will be substantially quicker than saving and loading files.\nThere are other ways to send information over the network, such as raw sockets (lower level), AMQP implementations like RabbitMQ (more structured\/complicated), HTTP requests\/replies, and so on. ZeroMQ is a pretty good option, but it probably depends on your situation.\nYou could also look at frameworks for distributed computing, such as that in IPython.","Q_Score":1,"Tags":"python,linux,zeromq","A_Id":6550992,"CreationDate":"2011-07-01T14:44:00.000","Title":"Distributing Real-Time Market Data Using ZeroMQ \/ NFS?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Suppose that you have a machine that gets fed with real-time stock prices from the exchange. These prices need to be transferred to 50 other machines in your network in the fastest possible way, so that each of them can run its own processing on the data.\nWhat would be the best \/ fastest way to send the data over to the other 50 machines?\nI am looking for a solution that would work on linux, using python as the programming language. Some ideas that I had are:\n(1) Send it to the other machines using python's ZeroMQ module\n(2) Save the data to a shared folder and have the 50 machines read it using NFS\nAny other ideas?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1280,"Q_Id":6549488,"Users Score":1,"Answer":"I would go with zeromq with pub\/sub sockets..\nin your 2 option, your \"clients\" will have to refresh in order to get your file modifications.. like polling.. if you have some write error, you will have to handle this by hand, which won't be easy as well..\nzeromq is simple, reliable and powerful.. i think that perfectly fit your case..","Q_Score":1,"Tags":"python,linux,zeromq","A_Id":6552072,"CreationDate":"2011-07-01T14:44:00.000","Title":"Distributing Real-Time Market Data Using ZeroMQ \/ NFS?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Suppose that you have a machine that gets fed with real-time stock prices from the exchange. These prices need to be transferred to 50 other machines in your network in the fastest possible way, so that each of them can run its own processing on the data.\nWhat would be the best \/ fastest way to send the data over to the other 50 machines?\nI am looking for a solution that would work on linux, using python as the programming language. Some ideas that I had are:\n(1) Send it to the other machines using python's ZeroMQ module\n(2) Save the data to a shared folder and have the 50 machines read it using NFS\nAny other ideas?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1280,"Q_Id":6549488,"Users Score":0,"Answer":"Definatly do NOT use the file system. ZeroMQ is a great solution wiht bindings in Py. 
I have some examples here: www.coastrd.com. Contact me if you need more help.","Q_Score":1,"Tags":"python,linux,zeromq","A_Id":6643883,"CreationDate":"2011-07-01T14:44:00.000","Title":"Distributing Real-Time Market Data Using ZeroMQ \/ NFS?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm still confused whether to use list or numpy array.\nI started with the latter, but since I have to do a lot of append \nI ended up with many vstacks slowing my code down.\nUsing list would solve this problem, but I also need to delete elements\nwhich again works well with delete on numpy array.\nAs it looks now I'll have to write my own data type (in a compiled language, and wrap).\nI'm just curious if there isn't a way to get the job done using a python type.\nTo summarize this are the criterions my data type would have to fulfil:\n\n2d n (variable) rows, each row k (fixed) elements\nin memory in one piece (would be nice for efficient operating)\nappend row (with an in average constant time, like C++ vector just always k elements)\ndelete a set of elements (best: inplace, keep free space at the end for later append)\naccess element given the row and column index ( O(1) like data[row*k+ column]\n\nIt appears generally useful to me to have a data type like this and not impossible to implement in C\/Fortran.\nWhat would be the closest I could get with python?\n(Or maybe, Do you think it would work to write a python class for the datatype? what performance should I expect in this case?)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":762,"Q_Id":6577657,"Users Score":1,"Answer":"As I see it, if you were doing this in C or Fortran, you'd have to have an idea of the size of the array so that you can allocate the correct amount of memory (ignoring realloc!). So assuming you do know this, why do you need to append to the array?\nIn any case, numpy arrays have the resize method, which you can use to extend the size of the array.","Q_Score":1,"Tags":"python,arrays,performance,numpy","A_Id":6581184,"CreationDate":"2011-07-05T03:04:00.000","Title":"Efficient Datatype Python (list or numpy array?)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I use pyplot.clabel to attach the file names to the lines being plotted?\nplt.clabel(data) line gives the error","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1086,"Q_Id":6577807,"Users Score":0,"Answer":"You may use plt.annotate or plt.text. 
\nAnd, as an aside, 1) you probably want to use different variables for the file names and numpy arrays you're loading your data into (what is data in data=plb.loadtxt(data)), \n2) you probably want to move the label positioning into the loop (in your code, what is data in the plt.clabel(data)).","Q_Score":0,"Tags":"python,matplotlib","A_Id":6580497,"CreationDate":"2011-07-05T03:49:00.000","Title":"matplotlib.pyplot how to add labels with .clabel?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently working with numpy on a 32bit system (Ubuntu 10.04 LTS).\nCan I expect my code to work fluently, in the same manner, on a 64bit (Ubuntu) system?\nDoes numpy have an compatibility issues with 64bit python?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2531,"Q_Id":6585176,"Users Score":4,"Answer":"NumPy has been used on 64-bit systems of all types for years now. I doubt you will find anything new that doesn't show up elsewhere as well.","Q_Score":1,"Tags":"python,numpy","A_Id":6585193,"CreationDate":"2011-07-05T15:29:00.000","Title":"Python: 64bit Numpy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to get reproducible results with the genetic programming code in chapter 11 of \"Programming Collective Intelligence\" by Toby Segaran. However, simply setting seed \"random.seed(55)\" does not appear to work, changing the original code \"from random import ....\" to \"import random\" doesn't help, nor does changing Random(). These all seem to do approximately the same thing, the trees start out building the same, then diverge.\nIn reading various entries about the behavior of random, I can find no reason, given his GP code, why this divergence should happen. There doesn't appear to be anything in the code except calls to random, that has any variability that would account for this behavior. My understanding is that calling random.seed() should set all the calls correctly and since the code isn't threaded at all, I'm not sure how or why the divergence is happening.\nHas anyone modified this code to behave reproducibly? Is there some form of calling random.seed() that may work better?\nI apologize for not posting an example, but the code is obviously not mine (I'm adding only the call to seed and changing how random is called in the code) and this doesn't appear to be a simple issue with random (I've read all the entries on Python random here and many on the web in general).\nThanks.\nMark L.","AnswerCount":3,"Available Count":1,"Score":0.3215127375,"is_accepted":false,"ViewCount":2656,"Q_Id":6614447,"Users Score":5,"Answer":"I had the same problem just now with some completely unrelated code. I believe my solution was similar to that in eryksun's answer, though I didn't have any trees. What I did have were some sets, and I was doing random.choice(list(set)) to pick values from them. Sometimes my results (the items picked) were diverging even with the same seed each time and I was close to pulling my hair out. After seeing eryksun's answer here I tried random.choice(sorted(set)) instead, and the problem appears to have disappeared. 
I don't know enough about the inner workings of Python to explain it.","Q_Score":7,"Tags":"python","A_Id":9271325,"CreationDate":"2011-07-07T17:12:00.000","Title":"Python random seed not working with Genetic Programming example code","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am attempting to apply k-means on a set of high-dimensional data points (about 50 dimensions) and was wondering if there are any implementations that find the optimal number of clusters. \nI remember reading somewhere that the way an algorithm generally does this is such that the inter-cluster distance is maximized and intra-cluster distance is minimized but I don't remember where I saw that. It would be great if someone can point me to any resources that discuss this. I am using SciPy for k-means currently but any related library would be fine as well.\nIf there are alternate ways of achieving the same or a better algorithm, please let me know.","AnswerCount":7,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":26563,"Q_Id":6615665,"Users Score":0,"Answer":"If the cluster number is unknow, why not use Hierarchical Clustering instead?\nAt the begining, every isolated one is a cluster, then every two cluster will be merged if their distance is lower than a threshold, the algorithm will end when no more merger goes.\nThe Hierarchical clustering algorithm can carry out a suitable \"K\" for your data.","Q_Score":41,"Tags":"python,machine-learning,data-mining,k-means","A_Id":19444825,"CreationDate":"2011-07-07T18:58:00.000","Title":"Kmeans without knowing the number of clusters?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am attempting to apply k-means on a set of high-dimensional data points (about 50 dimensions) and was wondering if there are any implementations that find the optimal number of clusters. \nI remember reading somewhere that the way an algorithm generally does this is such that the inter-cluster distance is maximized and intra-cluster distance is minimized but I don't remember where I saw that. It would be great if someone can point me to any resources that discuss this. I am using SciPy for k-means currently but any related library would be fine as well.\nIf there are alternate ways of achieving the same or a better algorithm, please let me know.","AnswerCount":7,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":26563,"Q_Id":6615665,"Users Score":0,"Answer":"One way to do it is to run k-means with large k (much larger than what you think is the correct number), say 1000. then, running mean-shift algorithm on the these 1000 point (mean shift uses the whole data but you will only \"move\" these 1000 points). 
mean shift will find the amount of clusters then.\nRunning mean shift without the k-means before is a possibility but it is just too slow usually O(N^2*#steps), so running k-means before will speed things up: O(NK#steps)","Q_Score":41,"Tags":"python,machine-learning,data-mining,k-means","A_Id":33374054,"CreationDate":"2011-07-07T18:58:00.000","Title":"Kmeans without knowing the number of clusters?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm processing some data for a research project, and I'm writing all my scripts in python. I've been using matplotlib to create graphs to present to my supervisor. However, he is a die-hard MATLAB user and he wants me to send him MATLAB .fig files rather than SVG images.\nI've looked all over but can't find anything to do the job. Is there any way to either export .fig files from matplotlib, convert .svg files to .fig, or import .svg files into MATLAB?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":2637,"Q_Id":6618132,"Users Score":2,"Answer":"Without access to (or experience with matlab) this is going to be a bit tricky. As Amro stated, .fig files store the underlying data, and not just an image, and you're going to have a hard time saving .fig files from python. There are however a couple of things which might work in your favour, these are: \n\nnumpy\/scipy can read and write matlab .mat files\nthe matplotlib plotting commands are very similar to\/ based on the matlab ones, so the code to generate plots from the data is going to be nearly identical (modulo round\/square brackets and 0\/1 based indexing).\n\nMy approach would be to write your data out as .mat files, and then just put your plotting commands in a script and give that to your supervisor - with any luck it shouldn't be too hard for him to recreate the plots based on that information.\nIf you had access to Matlab to test\/debug, I'm sure it would be possible to create some code which automagically created .mat files and a matlab .m file which would recreate the figures.\nThere's a neat list of matlab\/scipy equivalent commands on the scipy web site.\ngood luck!","Q_Score":4,"Tags":"python,matlab,matplotlib","A_Id":6631872,"CreationDate":"2011-07-07T22:59:00.000","Title":"Any way to get a figure from Python's matplotlib into Matlab?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"INTRODUCTION: I have a list of more than 30,000 integer values ranging from 0 to 47, inclusive, e.g.[0,0,0,0,..,1,1,1,1,...,2,2,2,2,...,47,47,47,...] sampled from some continuous distribution. The values in the list are not necessarily in order, but order doesn't matter for this problem.\nPROBLEM: Based on my distribution I would like to calculate p-value (the probability of seeing greater values) for any given value. For example, as you can see p-value for 0 would be approaching 1 and p-value for higher numbers would be tending to 0.\nI don't know if I am right, but to determine probabilities I think I need to fit my data to a theoretical distribution that is the most suitable to describe my data. 
I assume that some kind of goodness of fit test is needed to determine the best model.\nIs there a way to implement such an analysis in Python (Scipy or Numpy)?\nCould you present any examples?","AnswerCount":13,"Available Count":1,"Score":0.0461211021,"is_accepted":false,"ViewCount":181959,"Q_Id":6620471,"Users Score":3,"Answer":"What about storing your data in a dictionary where keys would be the numbers between 0 and 47 and values the number of occurrences of their related keys in your original list?\nThus your likelihood p(x) will be the sum of all the values for keys greater than x divided by 30000.","Q_Score":181,"Tags":"python,numpy,statistics,scipy,distribution","A_Id":6620533,"CreationDate":"2011-07-08T06:00:00.000","Title":"Fitting empirical distribution to theoretical ones with Scipy (Python)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a gz file that with a huge size, is it possible to replace the tail without touching the rest of the file? I tried gzip.open( filePath, mode = 'r+' ) but the write method was blocked .... saying it is a read-only object ... any idea? \nwhat I am doing now is... gzip.open as r and once I get the offset of the start of the tail, I close it and re-open it with gzip.open as a and seek (offset)... which is not likely the best idea \nthanks \nJohn","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":350,"Q_Id":6626629,"Users Score":2,"Answer":"Not possible - you can not replace parts of a compressed file without decompressing it first. At least not with the common compression algorithms.","Q_Score":0,"Tags":"python,gzip,tail","A_Id":6626730,"CreationDate":"2011-07-08T15:20:00.000","Title":"replace\/modify tail of a gz file with gzip.open","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a set() with terms like 'A' 'B' 'C'. I want a 2-d associative array so that i can perform an operation like d['A']['B'] += 1 . What is the pythonic way of doing this, I was thinking a dicts of dicts. Is there a better way.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":10309,"Q_Id":6696279,"Users Score":3,"Answer":"Is there any reason not to use a dict of dicts? It does what you want (though note that there's no such thing as ++ in Python), after all.\nThere's nothing stylistically poor or non-Pythonic about using a dict of dicts.","Q_Score":5,"Tags":"python,associative-array","A_Id":6696330,"CreationDate":"2011-07-14T16:02:00.000","Title":"Two dimensional associative array in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I used matplotlib to create some plot, which depends on 8 variables. I would like to study how the plot changes when I change some of them. I created some script that calls the matplotlib one and generates different snapshots that later I convert into a movie, it is not bad, but a bit clumsy. 
\n\nI wonder if somehow I could interact with the plot regeneration using keyboard keys to increase \/ decrease values of some of the variables and see instantly how the plot changes. \nWhat is the best approach for this? \nAlso if you can point me to interesting links or a link with a plot example with just two sliders?","AnswerCount":6,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":98213,"Q_Id":6697259,"Users Score":2,"Answer":"Use waitforbuttonpress(timeout=0.001) then plot will see your mouse ticks.","Q_Score":70,"Tags":"python,keyboard,matplotlib,interactive","A_Id":11928786,"CreationDate":"2011-07-14T17:10:00.000","Title":"Interactive matplotlib plot with two sliders","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to write a python script (call it parent) that does the following:\n(1) defines a multi-dimensional numpy array\n(2) forks 10 different python scripts (call them children). Each of them must be able to read the contents of the numpy array from (1) at any single point in time (as long as they are alive).\n(3) each of the child scripts will do it's own work (children DO NOT share any info with each other)\n(4) at any point in time, the parent script must be able to accept messages from all of its children. These messages will be parsed by the parent and cause the numpy array from (1) to change.\n\nHow do I go about this, when working in python in a Linux environment? I thought of using zeroMQ and have the parent be a single subscriber while the children will all be publishers; does it make sense or is there a better way for this?\nAlso, how do I allow all the children to continuously read the contents of the numpy array that was defined by the parent ?","AnswerCount":3,"Available Count":1,"Score":-0.1325487884,"is_accepted":false,"ViewCount":14920,"Q_Id":6700149,"Users Score":-2,"Answer":"In ZeroMQ there can only be one publisher per port. The only (ugly) workaround is to start each child PUB socket on a different port and have the parent listen on all those ports. 
\nbut the pipeline pattern describe on 0MQ, user guide is a much better way to do this.","Q_Score":9,"Tags":"python,zeromq","A_Id":6831300,"CreationDate":"2011-07-14T21:29:00.000","Title":"Python zeromq -- Multiple Publishers To a Single Subscriber?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know there is a lot of questions about Python and OpenCV but I didn't find help on this special topic.\nI want to extract SIFT keypoints from an image in python OpenCV.\nI have recently installed OpenCV 2.3 and can access to SURF and MSER but not SIFT.\nI can't see anything related to SIFT in python modules (cv and cv2) (well I'm lying a bit: there are 2 constants: cv2.SIFT_COMMON_PARAMS_AVERAGE_ANGLE and cv2.SIFT_COMMON_PARAMS_FIRST_ANGLE).\nThis puzzles me since a while.\nIs that related to the fact that some parts of OpenCV are in C and other in C++?\nAny idea?\nP.S.: I have also tried pyopencv (another python binding for OpenCV <= 2.1) without success.","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":12701,"Q_Id":6722736,"Users Score":1,"Answer":"Are you sure OpenCV is allowed to support SIFT? SIFT is a proprietary feature type, patented within the U.S. by the University of British Columbia and by David Lowe, the inventor of the algorithm. In my own research, I have had to re-write this algorithm many times. In fact, some vision researchers try to avoid SIFT and use other scale-invariant models because SIFT is proprietary.","Q_Score":18,"Tags":"python,opencv,sift","A_Id":7152292,"CreationDate":"2011-07-17T08:40:00.000","Title":"OpenCV Python and SIFT features","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for the fastest way to check for the occurrence of NaN (np.nan) in a NumPy array X. np.isnan(X) is out of the question, since it builds a boolean array of shape X.shape, which is potentially gigantic.\nI tried np.nan in X, but that seems not to work because np.nan != np.nan. Is there a fast and memory-efficient way to do this at all?\n(To those who would ask \"how gigantic\": I can't tell. This is input validation for library code.)","AnswerCount":8,"Available Count":2,"Score":0.1243530018,"is_accepted":false,"ViewCount":183568,"Q_Id":6736590,"Users Score":5,"Answer":"use .any()\nif numpy.isnan(myarray).any()\n\nnumpy.isfinite maybe better than isnan for checking\nif not np.isfinite(prop).all()","Q_Score":146,"Tags":"python,performance,numpy,nan","A_Id":46067557,"CreationDate":"2011-07-18T17:10:00.000","Title":"Fast check for NaN in NumPy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for the fastest way to check for the occurrence of NaN (np.nan) in a NumPy array X. np.isnan(X) is out of the question, since it builds a boolean array of shape X.shape, which is potentially gigantic.\nI tried np.nan in X, but that seems not to work because np.nan != np.nan. Is there a fast and memory-efficient way to do this at all?\n(To those who would ask \"how gigantic\": I can't tell. 
This is input validation for library code.)","AnswerCount":8,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":183568,"Q_Id":6736590,"Users Score":34,"Answer":"I think np.isnan(np.min(X)) should do what you want.","Q_Score":146,"Tags":"python,performance,numpy,nan","A_Id":6736673,"CreationDate":"2011-07-18T17:10:00.000","Title":"Fast check for NaN in NumPy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working with a program that writes output to a csv file based on the order that files are read in from a directory. However with a large number of files with the endings 1,2,3,4,5,6,7,8,9,10,11,12. My program actually reads the files by I guess alphabetical ordering: 1,10,11,12....,2,20,21.....99. The problem is that another program assumes that the ordering is in numerical ordering, and skews the graph results.\nThe actually file looks like: String.ext.ext2.1.txt, String.ext.ext2.2.txt, and so on...\nHow can I do this with a python script?","AnswerCount":4,"Available Count":1,"Score":0.2449186624,"is_accepted":false,"ViewCount":34380,"Q_Id":6743407,"Users Score":5,"Answer":"Sort your list of files in the program. Don't rely on operating system calls to give the files in the right order, it depends on the actual file system being used.","Q_Score":9,"Tags":"python,sorting,file-io","A_Id":6743440,"CreationDate":"2011-07-19T07:05:00.000","Title":"How to sort files in a directory before reading?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In Matlab, there is something called struct, which allow the user to have a dynamic set of matrices.\nI'm basically looking for a function that allows me to index over dynamic matrices that have different sizes.\nExample: (with 3 matrices)\n\nMatrix 1: 3x2 \nMatrix 2: 2x2 \nMatrix 3: 2x1\n\nBasically I want to store the 3 matrices on the same variable. To called them by their index number afterward (i.e. Matrix[1], Matrx[2]). Conventional python arrays do not allow for arrays with different dimensions to be stacked.\nI was looking into creating classes, but maybe someone her has a better alternative to this.\nThanks","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":4483,"Q_Id":6760380,"Users Score":7,"Answer":"Just use a tuple or list. 
\nA tuple matrices = (matrix1, matrix2, matrix3) will be slightly more efficient;\nA list matrices = [matrix1, matrix2, matrix3] is more flexible as you can matrices.append(matrix4).\nEither way you can access them as matrices[0] or for matrix in matrices: pass # do stuff.","Q_Score":2,"Tags":"python,matrix","A_Id":6760471,"CreationDate":"2011-07-20T10:22:00.000","Title":"N-Dimensional Matrix Array in Python (with different sizes)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In Matlab, there is something called struct, which allow the user to have a dynamic set of matrices.\nI'm basically looking for a function that allows me to index over dynamic matrices that have different sizes.\nExample: (with 3 matrices)\n\nMatrix 1: 3x2 \nMatrix 2: 2x2 \nMatrix 3: 2x1\n\nBasically I want to store the 3 matrices on the same variable. To called them by their index number afterward (i.e. Matrix[1], Matrx[2]). Conventional python arrays do not allow for arrays with different dimensions to be stacked.\nI was looking into creating classes, but maybe someone her has a better alternative to this.\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4483,"Q_Id":6760380,"Users Score":0,"Answer":"Put those arrays into a list.","Q_Score":2,"Tags":"python,matrix","A_Id":6760481,"CreationDate":"2011-07-20T10:22:00.000","Title":"N-Dimensional Matrix Array in Python (with different sizes)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have ported python3 csv module to C# what license could I use for my module? \nShould I distribute my module? \nShould I put PSF copyright in every header of my module?\nthanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":122,"Q_Id":6761201,"Users Score":0,"Answer":"You need to pay a copyright lawyer to tell you that. But my guess is that you need to use the PSF license. Note that PSF does not have the copyright to Python source code. The coders do. How that copyright translates into you making a C# port is something only a copyright expert can say. Also note that it is likely to vary from country to country.\nCopyright sucks.","Q_Score":1,"Tags":"python,module,licensing","A_Id":6761407,"CreationDate":"2011-07-20T11:33:00.000","Title":"Ported python3 csv module to C# what license should I use for my module?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've finished gathering my data I plan to use for my corpus, but I'm a bit confused about whether I should normalize the text. I plan to tag & chunk the corpus in the future. Some of NLTK's corpora are all lower case and others aren't.\nCan anyone shed some light on this subject, please?","AnswerCount":1,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":2668,"Q_Id":6767770,"Users Score":9,"Answer":"By \"normalize\" do you just mean making everything lowercase? \nThe decision about whether to lowercase everything is really dependent on what you plan to do. 
For some purposes, lowercasing everything is better because it lowers the sparsity of the data (uppercase words are rarer and might confuse the system unless you have a massive corpus such that the statistics on capitalized words are decent). In other tasks, case information might be valuable.\nAdditionally, there are other considerations you'll have to make that are similar. For example, should \"can't\" be treated as [\"can't\"], [\"can\", \"'t\"], or [\"ca\", \"n't\"] (I've seen all three in different corpora). What about 7-year-old? Is it one long word? Or three words that should be separated?\nThat said, there's no reason to reformat the corpus. You can just have your code make these changes on the fly. That way the original information is still around later if you ever need it.","Q_Score":6,"Tags":"python,nlp,nltk","A_Id":6767866,"CreationDate":"2011-07-20T20:01:00.000","Title":"NLTK - when to normalize the text?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The goal of this application is produce a system that can generate quizzes automatically. The user should be able to supply any word or phrase they like (e.g. \"Sachin Tendulkar\"); the system will then look for suitable topics online, identify a range of interesting facts, and rephrase them as quiz questions.\nIf I have the sentence \"Sachin was born in year 1973\", how can I rephrase it to \"Which Year was sachin born?\"","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":1334,"Q_Id":6787345,"Users Score":4,"Answer":"In the general case, this is a very hard open research question. However, you might be able to get away with a simple solution a long as your \"facts\" follow a pretty simple grammar. \nYou could write a fairly simple solution by creating a set of transformation rules that act on parse trees. So if you saw a structure that matched the grammar for \"X was Y in Z\" you could transform it to \"Was X Y in Z ?\", etc. Then all you would have to do is parse the fact, transform, and read off the question that is produced.","Q_Score":0,"Tags":"python,nlp,nltk","A_Id":6787446,"CreationDate":"2011-07-22T08:20:00.000","Title":"Quiz Generator using NLTK\/Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In Numpy (and Python in general, I suppose), how does one store a slice-index, such as (...,0,:), in order to pass it around and apply it to various arrays? It would be nice to, say, be able to pass a slice-index to and from functions.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":389,"Q_Id":6795657,"Users Score":0,"Answer":"I think you want to just do myslice = slice(1,2) to for example define a slice that will return the 2nd element (i.e. myarray[myslice] == myarray[1:2])","Q_Score":5,"Tags":"python,indexing,numpy,slice","A_Id":6795732,"CreationDate":"2011-07-22T20:20:00.000","Title":"Numpy: arr[...,0,:] works. 
But how do I store the data contained in the slice command (..., 0, :)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to create a numpy array of N elements, but I want to access the\narray with an offset Noff, i.e. the first element should be at Noff and\nnot at 0. In C this is simple to do with some simple pointer arithmetic, i.e.\nI malloc the array and then define a pointer and shift it appropriately.\nFurthermore, I do not want to allocate N+Noff elements, but only N elements.\nNow for numpy there are many methods that come to my mind:\n(1) define a wrapper function to access the array\n(2) overwrite the [] operator\n(3) etc\nBut what is the fastest method to realize this?\nThanks a lot!\nMark","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":2276,"Q_Id":6800534,"Users Score":2,"Answer":"Use A[n-offset]. this turns offset to offset+len(A) into 0 to len(A).","Q_Score":1,"Tags":"python,numpy","A_Id":6801439,"CreationDate":"2011-07-23T13:03:00.000","Title":"numpy array access","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to create a numpy array of N elements, but I want to access the\narray with an offset Noff, i.e. the first element should be at Noff and\nnot at 0. In C this is simple to do with some simple pointer arithmetic, i.e.\nI malloc the array and then define a pointer and shift it appropriately.\nFurthermore, I do not want to allocate N+Noff elements, but only N elements.\nNow for numpy there are many methods that come to my mind:\n(1) define a wrapper function to access the array\n(2) overwrite the [] operator\n(3) etc\nBut what is the fastest method to realize this?\nThanks a lot!\nMark","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":2276,"Q_Id":6800534,"Users Score":2,"Answer":"I would be very cautious about over-riding the [] operator through the __getitem__() method. Although it will be fine with your own code, I can easily imagine that when the array gets passed to an arbitrary library function, you could get problems. \nFor example, if the function explicitly tried to get all values in the array as A[0:-1], it would maps to A[offset:offset-1], which will be an empty array for any positive or negative value of offset. This may be a little contrived, but it illustrates the general problem.\nTherefore, I would suggest that you create a wrapper function for your own use (as a member function may be most convenient), but don't muck around with __getitem__().","Q_Score":1,"Tags":"python,numpy","A_Id":6812332,"CreationDate":"2011-07-23T13:03:00.000","Title":"numpy array access","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to plot some (x,y) points on the same graph and I don't need any special features at all short of support for polar coordinates which would be nice but not necessary. It's mostly for visualizing my data. Is there a simple way to do this? Matplotlib seems like way more than I need right now. 
Are there any more basic modules available? What do You recommend?","AnswerCount":8,"Available Count":1,"Score":0.024994793,"is_accepted":false,"ViewCount":69333,"Q_Id":6819653,"Users Score":1,"Answer":"You could always write a plotting function that uses the turtle module from the standard library.","Q_Score":10,"Tags":"python,plot","A_Id":6819725,"CreationDate":"2011-07-25T16:55:00.000","Title":"Plotting points in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using a proprietary Matlab MEX file to import some simulation results in Matlab (no source code available of course!). The interface with Matlab is actually really simple, as there is a single function, returning a Matlab struct. I would like to know if there is any way to call this function in the MEX file directly from Python, without having to use Matlab?\nWhat I have in mind is for example using something like SWIG to import the C function into Python by providing a custom Matlab-wrapper around it...\nBy the way, I know that with scipy.io.loadmat it is already possible to read Matlab binary *.mat data files, but I don't know if the data representation in a mat file is the same as the internal representation in Matlab (in which case it might be useful for the MEX wrapper).\nThe idea would be of course to be able to use the function provided in the MEX with no Matlab installation present on the system.\nThanks.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":7858,"Q_Id":6848790,"Users Score":1,"Answer":"A mex function is an api that allows Matlab (i.e. a matlab program) to call a function written in c\/c++. This function, in turn, can call Matlab own internal functions. As such, the mex function will be linked against Matlab libraries. Thus, to call a mex function directly from a Python program w\/o Matlab libraries doesn't look possible (and doesn't makes sense for that matter). \nOf consideration is why was the mex function created in the first place? Was it to make some non-matlab c libraries (or c code) available to matlab users, or was it to hide some proprietery matlab-code while still making it available to matlab users? If its the first case, then you could request the owners of the mex function to provide it in a non-mex dynamic lib form that you can include in another c or python program. This should be easy if the mex function doesnt depend on Matlab internal functions. \nOthers above have mentioned the matlab compiler... yes, you can include a mex function in a stand alone binary callable from unix (thus from python but as a unix call) if you use the Matlab Compiler to produce such binary. This would require the binary to be deployed along with Matlab's runtime environment. This is not quite the same as calling a function directly from python-- there are no return values for example.","Q_Score":7,"Tags":"python,matlab,mex","A_Id":11173545,"CreationDate":"2011-07-27T17:42:00.000","Title":"Embed a function from a Matlab MEX file directly in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In my python environment, the Rpy and Scipy packages are already installed. 
\nThe problem I want to tackle is such:\n1) A huge set of financial data are stored in a text file. Loading into Excel is not possible\n2) I need to sum a certain fields and get the totals.\n3) I need to show the top 10 rows based on the totals.\nWhich package (Scipy or Rpy) is best suited for this task? \nIf so, could you provide me some pointers (e.g. documentation or online example) that can help me to implement a solution?\nSpeed is a concern. Ideally scipy and Rpy can handle the large files when even when the files are so large that they cannot be fitted into memory","AnswerCount":6,"Available Count":3,"Score":0.0333209931,"is_accepted":false,"ViewCount":3050,"Q_Id":6853923,"Users Score":1,"Answer":"I don't know anything about Rpy. I do know that SciPy is used to do serious number-crunching with truly large data sets, so it should work for your problem.\nAs zephyr noted, you may not need either one; if you just need to keep some running sums, you can probably do it in Python. If it is a CSV file or other common file format, check and see if there is a Python module that will parse it for you, and then write a loop that sums the appropriate values. \nI'm not sure how to get the top ten rows. Can you gather them on the fly as you go, or do you need to compute the sums and then choose the rows? To gather them you might want to use a dictionary to keep track of the current 10 best rows, and use the keys to store the metric you used to rank them (to make it easy to find and toss out a row if another row supersedes it). If you need to find the rows after the computation is done, slurp all the data into a numpy.array, or else just take a second pass through the file to pull out the ten rows.","Q_Score":7,"Tags":"python,r,numpy,scipy,memory-mapped-files","A_Id":6854030,"CreationDate":"2011-07-28T03:53:00.000","Title":"Python: handling a large set of data. Scipy or Rpy? And how?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In my python environment, the Rpy and Scipy packages are already installed. \nThe problem I want to tackle is such:\n1) A huge set of financial data are stored in a text file. Loading into Excel is not possible\n2) I need to sum a certain fields and get the totals.\n3) I need to show the top 10 rows based on the totals.\nWhich package (Scipy or Rpy) is best suited for this task? \nIf so, could you provide me some pointers (e.g. documentation or online example) that can help me to implement a solution?\nSpeed is a concern. Ideally scipy and Rpy can handle the large files when even when the files are so large that they cannot be fitted into memory","AnswerCount":6,"Available Count":3,"Score":0.1651404129,"is_accepted":false,"ViewCount":3050,"Q_Id":6853923,"Users Score":5,"Answer":"Neither Rpy or Scipy is necessary, although numpy may make it a bit easier.\nThis problem seems ideally suited to a line-by-line parser.\nSimply open the file, read a row into a string, scan the row into an array (see numpy.fromstring), update your running sums and move to the next line.","Q_Score":7,"Tags":"python,r,numpy,scipy,memory-mapped-files","A_Id":6853981,"CreationDate":"2011-07-28T03:53:00.000","Title":"Python: handling a large set of data. Scipy or Rpy? 
And how?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In my python environment, the Rpy and Scipy packages are already installed. \nThe problem I want to tackle is such:\n1) A huge set of financial data are stored in a text file. Loading into Excel is not possible\n2) I need to sum a certain fields and get the totals.\n3) I need to show the top 10 rows based on the totals.\nWhich package (Scipy or Rpy) is best suited for this task? \nIf so, could you provide me some pointers (e.g. documentation or online example) that can help me to implement a solution?\nSpeed is a concern. Ideally scipy and Rpy can handle the large files when even when the files are so large that they cannot be fitted into memory","AnswerCount":6,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":3050,"Q_Id":6853923,"Users Score":2,"Answer":"As @gsk3 noted, bigmemory is a great package for this, along with the packages biganalytics and bigtabulate (there are more, but these are worth checking out). There's also ff, though that isn't as easy to use.\nCommon to both R and Python is support for HDF5 (see the ncdf4 or NetCDF4 packages in R), which makes it very speedy and easy to access massive data sets on disk. Personally, I primarily use bigmemory, though that's R specific. As HDF5 is available in Python and is very, very fast, it's probably going to be your best bet in Python.","Q_Score":7,"Tags":"python,r,numpy,scipy,memory-mapped-files","A_Id":7559475,"CreationDate":"2011-07-28T03:53:00.000","Title":"Python: handling a large set of data. Scipy or Rpy? And how?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to effeciently remove duplicate rows from relatively large (several hundred MB) CSV files that are not ordered in any meaningful way. Although I have a technique to do this, it is very brute force and I am certain there is a moe elegant and more effecient way.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1090,"Q_Id":6863756,"Users Score":2,"Answer":"In order to remove duplicates you will have to have some sort of memory that tells you if you have seen a line before. Either by remembering the lines or perhaps a checksum of them (which is almost safe...)\nAny solution like that will probably have a \"brute force\" feel to it. 
\nIf you could have the lines sorted before processing them, then the task is fairly easy as duplicates would be next to each other.","Q_Score":1,"Tags":"python,csv,performance","A_Id":6863816,"CreationDate":"2011-07-28T18:21:00.000","Title":"Efficiently Removing Duplicates from a CSV in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the difference between these two algorithms?","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":11234,"Q_Id":6931209,"Users Score":16,"Answer":"In a use case (5D nearest neighbor lookups in a KDTree with approximately 100K points) cKDTree is around 12x faster than KDTree.","Q_Score":42,"Tags":"python,scipy,kdtree","A_Id":15331547,"CreationDate":"2011-08-03T18:16:00.000","Title":"Difference between scipy.spatial.KDTree and scipy.spatial.cKDTree","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a numpy\/scipy issue. I have a 3D array that represents an ellipsoid in a binary way [ 0 out of the ellipsoid].\nThe thing is I would like to rotate my shape by a certain degree. Do you think it's possible?\nOr is there an efficient way to write directly the ellipsoid equation with the rotation?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":2379,"Q_Id":6938377,"Users Score":1,"Answer":"Take a look at the command numpy.shape.\nI used it once to transpose an array, but I don't know if it might fit your needs. \nCheers!","Q_Score":1,"Tags":"python,arrays,numpy,scipy,rotation","A_Id":6938587,"CreationDate":"2011-08-04T08:32:00.000","Title":"How to rotate a numpy array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a numpy\/scipy issue. I have a 3D array that represents an ellipsoid in a binary way [ 0 out of the ellipsoid].\nThe thing is I would like to rotate my shape by a certain degree. Do you think it's possible?\nOr is there an efficient way to write directly the ellipsoid equation with the rotation?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":2379,"Q_Id":6938377,"Users Score":2,"Answer":"Just a short answer. If you need more information or you don't know how to do it, then I will edit this post and add a small example. \nThe right way to rotate your matrix of data points is to do a matrix multiplication. Your rotation matrix would probably be an n*n matrix and you have to multiply it with every point. If you have your 3D matrix you have something like i*j*k points for plotting. This means for your case you have to do it i*j*k times to find the new points. Maybe you should consider another matrix for plotting which is just a 2D matrix and just store the plotting points and no zero values. \nThere are some algorithms to calculate the results faster for low-valued matrices, but just google for this. \nDid you understand me or do you still have some questions? 
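The rotation-matrix idea in the answer above can be shown in two dimensions; this is only a sketch assuming an (N, 2) array of coordinates rotated about the origin, and a 3-D version would use a 3x3 rotation matrix instead.

```python
import numpy as np

def rotate_coords(coords, angle_deg):
    """Rotate an (N, 2) array of x, y points about the origin by angle_deg,
    i.e. multiply every point by a 2x2 rotation matrix as described above."""
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return coords @ rot.T

if __name__ == "__main__":
    pts = np.array([[1.0, 0.0], [0.0, 1.0]])
    print(rotate_coords(pts, 90))   # approximately [[0, 1], [-1, 0]]
```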
Sorry for this rough overview.\nBest regards","Q_Score":1,"Tags":"python,arrays,numpy,scipy,rotation","A_Id":6941191,"CreationDate":"2011-08-04T08:32:00.000","Title":"How to rotate a numpy array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to build a heavy duty molecular dynamics simulator. I am wondering if python+numpy is a good choice. This will be used in production, so I wanted to start with a good language. I am wondering if I should rather start with a functional language like eg.scala. Do we have enough library support for scientific computation in scala? Or any other language\/paradigm combination you think is good - and why. If you had actually built something in the past and are talking from experience, please mention it as it will help me with collecting data points.\nthanks much!","AnswerCount":4,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":2217,"Q_Id":6948483,"Users Score":6,"Answer":"I believe that most highly performant MD codes are written in native languages like Fortran, C or C++. Modern GPU programming techniques are also finding favour more recently.\nA language like Python would allow for much more rapid development that native code. The flip side of that is that the performance is typically worse than for compiled native code.\nA question for you. Why are you writing your own MD code? There are many many libraries out there. Can't you find one to suit your needs?","Q_Score":4,"Tags":"python,scala,numpy,simulation,scientific-computing","A_Id":6948576,"CreationDate":"2011-08-04T21:03:00.000","Title":"Best language for Molecular Dynamics Simulator, to be run in production. (Python+Numpy?)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using matplotlib to make a histogram.\nIs there any way to manually set the size of the bins as opposed to the number of bins?","AnswerCount":9,"Available Count":1,"Score":0.1106561105,"is_accepted":false,"ViewCount":355218,"Q_Id":6986986,"Users Score":5,"Answer":"I guess the easy way would be to calculate the minimum and maximum of the data you have, then calculate L = max - min. Then you divide L by the desired bin width (I'm assuming this is what you mean by bin size) and use the ceiling of this value as the number of bins.","Q_Score":185,"Tags":"python,matplotlib,histogram","A_Id":6987109,"CreationDate":"2011-08-08T18:46:00.000","Title":"Bin size in Matplotlib (Histogram)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I managed to plot my data and would like to add a background image (map) to it.\nData is plotted by the long\/lat values and I have the long\/lat values for the image's three corners (top left, top right and bottom left) too.\nI am trying to figure out how to use 'extent' option with imshow. 
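The bin-count recipe in the histogram answer above (data range divided by the desired bin width, then the ceiling) might look like the following; the sample data and the 0.25 bin width are assumptions for illustration.

```python
import math
import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(1000)           # stand-in data, not from the question
bin_width = 0.25                        # the desired bin size
span = data.max() - data.min()
n_bins = math.ceil(span / bin_width)    # ceiling of range / width, as suggested above

plt.hist(data, bins=n_bins)
plt.show()
```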
However, the examples I found don't explain how to assign x and y for each corner ( in my case I have the information for three corners).\nHow can I assign the location of three corners for the image when adding it to the plot?\nThanks","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":80144,"Q_Id":6999621,"Users Score":51,"Answer":"Specify, in the coordinates of your current axis, the corners of the rectangle that you want the image to be pasted over\nExtent defines the left and right limits, and the bottom and top limits. It takes four values like so: extent=[horizontal_min,horizontal_max,vertical_min,vertical_max]. \nAssuming you have longitude along the horizontal axis, then use extent=[longitude_top_left,longitude_top_right,latitude_bottom_left,latitude_top_left]. longitude_top_left and longitude_bottom_left should be the same, latitude_top_left and latitude_top_right should be the same, and the values within these pairs are interchangeable. \nIf your first element of your image should be plotted in the lower left, then use the origin='lower' imshow option as well, otherwise the 'upper' default is what you want.","Q_Score":37,"Tags":"python,plot,matplotlib","A_Id":7000381,"CreationDate":"2011-08-09T16:34:00.000","Title":"how to use 'extent' in matplotlib.pyplot.imshow","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"does there exist an equivalent to matplotlib's imshow()-function for 3D-drawing of datas stored in a 3D numpy array?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":11267,"Q_Id":7011428,"Users Score":0,"Answer":"What you want is a kind of 3D image (a block). Maybe you could plot it by slices (using imshow() or whatever the tool you want). \nMaybe you could tell us what kind of plot you want?","Q_Score":8,"Tags":"python,numpy,matplotlib","A_Id":7011948,"CreationDate":"2011-08-10T13:20:00.000","Title":"imshow for 3D? (Python \/ Matplotlib)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am reading in census data using the matplotlib cvs2rec function - works fine gives me a nice ndarray. \nBut there are several columns where all the values are '\"none\"\" with dtype |04. This is cuasing problems when I lode into Atpy \"TypeError: object of NoneType has no len()\". Something like '9999' or other missing would work for me. Mask is not going to work in this case because I am passing the real array to ATPY and it will not convert MASK. The Put function in numpy will not work with none values wich is the best way to change values(I think). I think some sort of boolean array is the way to go but I can't get it to work. \nSo what is a good\/fast way to change none values and\/or uninitialized numpy array to something like '9999'or other recode. No Masking. \nThanks,\nMatthew","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2140,"Q_Id":7011591,"Users Score":0,"Answer":"you can use mask array when you do calculation. 
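A minimal sketch of the accepted extent answer above, assuming placeholder corner longitudes/latitudes and a placeholder map.png background image; the extent list is ordered [left, right, bottom, top], as the answer describes.

```python
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

img = mpimg.imread("map.png")           # placeholder background image

# Corner coordinates of the image (placeholders): left/right longitudes,
# bottom/top latitudes, in the same units as the plotted data.
lon_left, lon_right = -10.0, 5.0
lat_bottom, lat_top = 40.0, 55.0

plt.imshow(img, extent=[lon_left, lon_right, lat_bottom, lat_top],
           origin="upper", aspect="auto")
plt.plot([-3.7, 2.35], [40.4, 48.9], "ro")   # sample lon/lat data points
plt.show()
```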
and when you pass the array to ATPY, you can call the filled(9999) method of the mask array to convert it to a normal array with the invalid values replaced by 9999.","Q_Score":2,"Tags":"python,arrays,numpy,missing-data","A_Id":7046562,"CreationDate":"2011-08-10T13:30:00.000","Title":"Recode missing data Numpy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a big script in Python. I took inspiration from other people's code so I ended up using the numpy.random module for some things (for example for creating an array of random numbers taken from a binomial distribution) and in other places I use the module random.random.\nCan someone please tell me the major differences between the two?\nLooking at the doc webpage for each of the two it seems to me that numpy.random just has more methods, but I am unclear about how the generation of the random numbers is different.\nThe reason why I am asking is because I need to seed my main program for debugging purposes. But it doesn't work unless I use the same random number generator in all the modules that I am importing, is this correct?\nAlso, I read here, in another post, a discussion about NOT using numpy.random.seed(), but I didn't really understand why this was such a bad idea. I would really appreciate it if someone explained why this is the case.","AnswerCount":4,"Available Count":1,"Score":0.1488850336,"is_accepted":false,"ViewCount":53810,"Q_Id":7029993,"Users Score":3,"Answer":"The source of the seed and the distribution profile used are going to affect the outputs - if you are looking for cryptographic randomness, seeding from os.urandom() will get nearly real random bytes from device chatter (i.e. ethernet or disk; \/dev\/random on BSD).\nThis avoids you giving a seed and so generating deterministic random numbers. However, the random calls then allow you to fit the numbers to a distribution (what I call scientific randomness - eventually all you want is a bell-curve distribution of random numbers, and numpy is best at delivering this).\nSo yes, stick with one generator, but decide what kind of random you want - random, but definitely from a distribution curve, or as random as you can get without a quantum device.","Q_Score":117,"Tags":"python,random,random-seed","A_Id":7030943,"CreationDate":"2011-08-11T17:05:00.000","Title":"Differences between numpy.random and random.random in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"there are huge number of data, there are various groups. i want to check whether the new data fits in any group and if it does i want to put that data into that group. If datum doesn't fit to any of the group, i want to create new group. So, i want to use linked list for the purpose or is there any other way to doing so??\nP.S. i have way to check the similarity between data and group representative(lets not go that in deatil for now) but i dont know how to add the data to group (each group may be list) or create new one if required. 
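The filled(9999) recode described in the masked-array answer above might be sketched as follows, assuming the missing entries can first be represented as NaN; the sample array and the sentinel value are illustrative only.

```python
import numpy as np

raw = np.array([1.5, np.nan, 3.0, np.nan, 7.2])   # stand-in column with missing values

# Mask the missing entries, then materialise a plain array with the mask
# replaced by a sentinel value, as the answer above suggests.
masked = np.ma.masked_invalid(raw)
recoded = masked.filled(9999)
print(recoded)          # 1.5, 3.0 and 7.2 kept; missing entries become 9999.0
```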
i guess what i needis linked list implementation in python, isn't it?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":468,"Q_Id":7067726,"Users Score":7,"Answer":"This sounds like a perfect use for a dictionary.","Q_Score":0,"Tags":"python,linked-list","A_Id":7067801,"CreationDate":"2011-08-15T16:30:00.000","Title":"linked list in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a web application in Django and I'm looking to generate dynamic graphs based on the data.\nPreviously I was using the Google Image Charts, but I ran into significant limitations with the api, including the URL length constraint.\nI've switched to using matplotlib to create my charts. I'm wondering if this will be a bad decision for future scaling - do any production sites use matplotlib? It takes about 100ms to generate a single graph (much longer than querying the database for the data), is this simply out of the question in terms of scaling\/handling multiple requests?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":228,"Q_Id":7078010,"Users Score":0,"Answer":"If performance is such an issue and you don't need fancy graphs, you may be able to get by with not creating images at all. Render explicitly sized and colored divs for a simple bar chart in html. Apply box-shadow and\/or a gradient background for eye candy.\nI did this in some report web pages, displaying a small 5-bar (quintiles) chart in each row of a large table, with huge speed and almost no server load. The users love it for the early and succinct feedback.\nUsing canvas and javascript you could improve on this scheme for other chart types. I don't know if you could use Google's charting code for this without going through their API, but a few lines or circle segments should be easy to paint yourselves if not.","Q_Score":0,"Tags":"graphics,python","A_Id":7078262,"CreationDate":"2011-08-15T02:07:00.000","Title":"Generating dynamic graphs","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I spent the past couple of days trying to get opencv to work with my Python 2.7 install. I kept getting an error saying that opencv module was not found whenever I try \"import cv\".\nI then decided to try installing opencv using Macports, but that didn't work.\nNext, I tried Homebrew, but that didn't work either.\nEventually, I discovered I should modify the PYTHONPATH as such:\nexport PYTHONPATH=\"\/usr\/local\/lib\/python2.6\/site-packages\/:$PYTHONPATH\"\nMy problem is that I didn't find \/usr\/local\/lib\/python2.*...etc\nThe folder simply doesn't exist\nSo my question is this:\nHow do I properly install Python on OS X Snow Leopard for it to work with opencv? \nThanks a lot,","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":7935,"Q_Id":7128761,"Users Score":2,"Answer":"You need to install the module using your python2.7 installation. Pointing your PYTHONPATH at stuff installed under 2.6 to run under 2.7 is a Bad Idea. 
\nDepending on how you want to install it, do something like python2.7 setup.py or easy_install-2.7 opencv to install.\nfwiw, on OS X the modules are usually installed under \/System\/Library\/Frameworks\/Python.framework\/ but you should almost never need to know where anything installed in your site packages is physically located; if Python can't find them without help you've installed them wrong.","Q_Score":4,"Tags":"python,macos,opencv,homebrew","A_Id":7129002,"CreationDate":"2011-08-20T00:39:00.000","Title":"How to properly install Python on OSX for use with OpenCV?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have many sets of medical image sequences of the artery on the heart. Each set of sequenced medical images shows the position of the artery as the heart pumps. Each set is taken from different views and has different amount of images taken.\nI want to do a temporal interpolation based on time (i was told that the time could be represented by the frame number. however i have no idea what the frame number is or what does it refer to. Could you enlighten me please?) I have two options: by interpolating the whole image frame or interpolating artery junction positions (coordinates). How do i go about doing both options?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":163,"Q_Id":7162351,"Users Score":0,"Answer":"If you imagine each of the images as a still photo the frame number would be a sequence number that shows what order the images should be displayed in to produce a movie from the stills. If the images are stored in an array it would be the array index of the individul frame in question.","Q_Score":0,"Tags":"python,interpolation","A_Id":7162436,"CreationDate":"2011-08-23T14:09:00.000","Title":"temporal interpolation of artery angiogram images sequences","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I work in an psudo-operational environment where we make new imagery on receipt of data. Sometimes when new data comes in, we need to re-open an image and update that image in order to create composites, add overlays, etc. In addition to adding to the image, this requires modification of titles, legends, etc. \nIs there something built into matplotlib that would let me store and reload my matplotlib.pyplot object for later use? It would need to maintain access to all associated objects including figures, lines, legends, etc. Maybe pickle is what I'm looking for, but I doubt it.","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":43605,"Q_Id":7290370,"Users Score":1,"Answer":"I produced figures for a number of papers using matplotlib. Rather than thinking of saving the figure (as in MATLAB), I would write a script that plotted the data then formatted and saved the figure. 
In cases where I wanted to keep a local copy of the data (especially if I wanted to be able to play with it again) I found numpy.savez() and numpy.load() to be very useful.\nAt first I missed the shrink-wrapped feel of saving a figure in MATLAB, but after a while I have come to prefer this approach because it includes the data in a format that is available for further analysis.","Q_Score":50,"Tags":"python,matplotlib","A_Id":8822734,"CreationDate":"2011-09-03T00:27:00.000","Title":"Store and reload matplotlib.pyplot object","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I work in an psudo-operational environment where we make new imagery on receipt of data. Sometimes when new data comes in, we need to re-open an image and update that image in order to create composites, add overlays, etc. In addition to adding to the image, this requires modification of titles, legends, etc. \nIs there something built into matplotlib that would let me store and reload my matplotlib.pyplot object for later use? It would need to maintain access to all associated objects including figures, lines, legends, etc. Maybe pickle is what I'm looking for, but I doubt it.","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":43605,"Q_Id":7290370,"Users Score":0,"Answer":"Did you try the pickle module? It serialises an object, dumps it to a file, and can reload it from the file later.","Q_Score":50,"Tags":"python,matplotlib","A_Id":7843630,"CreationDate":"2011-09-03T00:27:00.000","Title":"Store and reload matplotlib.pyplot object","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a scientific program in Python and C with some complex physical simulation algorithms. After implementing algorithm, I found that there are a lot of possible optimizations to improve performance. Common ones are precalculating values, getting calculations out of cycle, replacing simple matrix algorithms with more complex and other. But there arises a problem. Unoptimized algorithm is much slower, but its logic and connection with theory look much clearer and readable. Also, it's harder to extend and modify optimized algorithm. \nSo, the question is - what techniques should I use to keep readability while improving performance? Now I am trying to keep both fast and clear branches and develop them in parallel, but maybe there are better methods?","AnswerCount":4,"Available Count":1,"Score":0.1488850336,"is_accepted":false,"ViewCount":434,"Q_Id":7300903,"Users Score":3,"Answer":"Yours is a very good question that arises in almost every piece of code, however simple or complex, that's written by any programmer who wants to call himself a pro. \nI try to remember and keep in mind that a reader newly come to my code has pretty much the same crude view of the problem and the same straightforward (maybe brute force) approach that I originally had. Then, as I get a deeper understanding of the problem and paths to the solution become clearer, I try to write comments that reflect that better understanding. I sometimes succeed and those comments help readers and, especially, they help me when I come back to the code six weeks later. 
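The numpy.savez()/numpy.load() workflow described in the answer above, sketched with placeholder file names and arrays; the point is that the data is saved alongside the figure so the plot can be regenerated and restyled later.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 200)
y = np.sin(x)

np.savez("plot_data.npz", x=x, y=y)     # keep the raw data next to the figure

# Later (or in a fresh session): reload and re-plot / re-style as needed.
data = np.load("plot_data.npz")
plt.plot(data["x"], data["y"])
plt.title("Re-plotted from saved arrays")
plt.savefig("figure.png")
```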
My style is to write plenty of comments anyway and, when I don't (because: a sudden insight gets me excited; I want to see it run; my brain is fried), I almost always greatly regret it later.\nIt would be great if I could maintain two parallel code streams: the na\u00efve way and the more sophisticated optimized way. But I have never succeeded in that.\nTo me, the bottom line is that if I can write clear, complete, succinct, accurate and up-to-date comments, that's about the best I can do. \nJust one more thing that you know already: optimization usually doesn't mean shoehorning a ton of code onto one source line, perhaps by calling a function whose argument is another function whose argument is another function whose argument is yet another function. I know that some do this to avoid storing a function's value temporarily. But it does very little (usually nothing) to speed up the code and it's a bitch to follow. No news to you, I know.","Q_Score":18,"Tags":"python,performance,algorithm,optimization,code-readability","A_Id":7301095,"CreationDate":"2011-09-04T17:25:00.000","Title":"Preserve code readability while optimising","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Anyone know if OpenCV is capable of loading a multi-frame TIFF stack? \nI'm using OpenCV 2.2.0 with python 2.6.","AnswerCount":6,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":12017,"Q_Id":7335308,"Users Score":8,"Answer":"Unfortunately OpenCV does not support TIFF directories and is able to read only the first frame from multi-frame TIFF files.","Q_Score":10,"Tags":"python,image,opencv","A_Id":7337353,"CreationDate":"2011-09-07T14:10:00.000","Title":"Can I load a multi-frame TIFF through OpenCV?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have about 100 CSV files I have to operate on once a month and I was trying to wrap my head around this but I'm running into a wall. I'm starting to understand some things about Python, but combining several things is still giving me issues, so I can't figure this out.\nHere's my problem:\nI have many CSV files, and here's what I need done:\nadd a \"column\" to the front of each row (or the back, doesn't matter really, but front is ideal). In addition, each line has 5 rows (not counting the filename that will be added), and here's the format:\n6-digit ID number,YYYY-MM-DD(1),YYYY-MM-DD(2),YYYY-MM-DD(3),1-2-digit number\nI need to subtract YYYY-MM-DD(3) from YYYY-MM-DD(2) for every line in the file (there is no header row), for every CSV in a given directory.\nI need the filename inside the row because I will combine the files (which, if is included in the script would be awesome, but I think I can figure that part out), and I need to know what file the records came from. Format of filename is always '4-5-digit-number.csv'\nI hope this makes sense, if it does not, please let me know. I'm kind of stumped as to where to even begin, so I don't have any sample code that even really began to work for me. 
Really frustrated, so I appreciate any help you guys may provide, this site rocks!\nMylan","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2947,"Q_Id":7350851,"Users Score":0,"Answer":"The basic outline of the program is going to be like this:\n\nUse the os module to get the filenames out of the directory\/directories of interest\nRead in each file one at a time\nFor each line in the file, split it into columns with columns = line.split(\",\")\nUse datetime.date to convert strings like \"2011-05-03\" to datetime.dates.\nSubtract the third date from the second, which yields a datetime.timedelta.\nPut all your information in the format you want (hint: str(foo) yields a string representation of foo, for just about any type) and remember it for later\nClose your file, reopen it for writing, and write your new stuff in","Q_Score":2,"Tags":"python,csv,datestamp","A_Id":7351024,"CreationDate":"2011-09-08T15:46:00.000","Title":"Need to do a math operation on every line in several CSV files in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have two sq matrix (a, b) of size in order of 100000 X 100000. I have to take difference of these two matrix (c = a-b). Resultant matrix 'c' is a sparse matrix. I want to find the indices of all non-zero elements. I have to do this operation many times (>100).\nSimplest way is to use two for loops. But that's computationally intensive. Can you tell me any algorithm or package\/library preferably in R\/python\/c to do this as quickly as possible?","AnswerCount":6,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":6600,"Q_Id":7361447,"Users Score":4,"Answer":"Since you have two dense matrices then the double for loop is the only option you have. You don't need a sparse matrix class at all since you only want to know the list of indices (i,j) for which a[i,j] != b[i,j].\nIn languages like R and Python the double for loop will perform poorly. I'd probably write this in native code for a double for loop and add the indices to a list object. But no doubt the wizards of interpreted code (i.e. R, Python etc.) know efficient ways to do it without resorting to native coding.","Q_Score":6,"Tags":"python,algorithm,r,sparse-matrix,indices","A_Id":7362256,"CreationDate":"2011-09-09T12:12:00.000","Title":"How to find indices of non zero elements in large sparse matrix?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to graph alarm counts in Python to give some sort of display to give an idea of the peak amount of network elements down between two timespans. The way that our alarms report handles it is in CSV like this:\nName,Alarm Start,Alarm Clear\nNE1,15:42 08\/09\/11,15:56 08\/09\/11\nNE2,15:42 08\/09\/11,15:57 08\/09\/11\nNE3,15:42 08\/09\/11,16:31 08\/09\/11\nNE4,15:42 08\/09\/11,15:59 08\/09\/11\n\nI am trying to graph the start and end between those two points and how many NE's were down during that time, including the maximum number and when it went under or over a certain count. An example is below:\n15:42 08\/09\/11 - 4 Down\n15:56 08\/09\/11 - 3 Down\netc.\nAny advice where to start on this would be great. 
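A hedged sketch of the step-by-step outline above for the monthly CSV job; the column positions, the date format string and writing everything into a single combined output file are assumptions beyond the original outline.

```python
import csv
import os
from datetime import datetime

def process(directory, out_path):
    """Prepend the source filename to each row and append the difference in
    days between the third and second dates, combining all CSVs into one file."""
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for name in os.listdir(directory):
            if not name.endswith(".csv"):
                continue
            with open(os.path.join(directory, name), newline="") as fh:
                for row in csv.reader(fh):
                    d2 = datetime.strptime(row[2], "%Y-%m-%d").date()
                    d3 = datetime.strptime(row[3], "%Y-%m-%d").date()
                    delta_days = (d2 - d3).days       # subtract date(3) from date(2)
                    writer.writerow([name] + row + [delta_days])
```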
Thanks in advance, you guys and gals have been a big help in the past.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":52,"Q_Id":7363997,"Users Score":1,"Answer":"I'd start by parsing your indata to a map indexed by dates with counts as values. Just increase the count for each row with the same date you encounter.\nAfter that, use some plotting module, for instance matplotlib to plot the keys of the map versus the values. That should cover it! \nDo you need any more detailed ideas?","Q_Score":1,"Tags":"python,graph","A_Id":7364602,"CreationDate":"2011-09-09T15:28:00.000","Title":"Graphing the number of elements down based on timestamps start\/end","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Background\nI am working on a fairly computationally intensive project for a computational linguistics project, but the problem I have is quite general and hence I expect that a solution would be interesting to others as well.\nRequirements\nThe key aspect of this particular program I must write is that it must: \n\nRead through a large corpus (between 5G and 30G, and potentially larger stuff down the line)\nProcess the data on each line.\nFrom this processed data, construct a large number of vectors (dimensionality of some of these vectors is > 4,000,000). Typically it is building hundreds of thousands of such vectors.\nThese vectors must all be saved to disk in some format or other.\n\nSteps 1 and 2 are not hard to do efficiently: just use generators and have a data-analysis pipeline. The big problem is operation 3 (and by connection 4)\nParenthesis: Technical Details\nIn case the actual procedure for building vectors affects the solution:\nFor each line in the corpus, one or more vectors must have its basis weights updated. \nIf you think of them in terms of python lists, each line, when processed, updates one or more lists (creating them if needed) by incrementing the values of these lists at one or more indices by a value (which may differ based on the index). \nVectors do not depend on each other, nor does it matter which order the corpus lines are read in.\nAttempted Solutions\nThere are three extrema when it comes to how to do this:\n\nI could build all the vectors in memory. Then write them to disk.\nI could build all the vectors directly on the disk, using shelf of pickle or some such library.\nI could build the vectors in memory one at a time and writing it to disk, passing through the corpus once per vector.\n\nAll these options are fairly intractable. 1 just uses up all the system memory, and it panics and slows to a crawl. 2 is way too slow as IO operations aren't fast. 3 is possibly even slower than 2 for the same reasons.\nGoals\nA good solution would involve:\n\nBuilding as much as possible in memory.\nOnce memory is full, dump everything to disk.\nIf bits are needed from disk again, recover them back into memory to add stuff to those vectors. \nGo back to 1 until all vectors are built.\n\nThe problem is that I'm not really sure how to go about this. It seems somewhat unpythonic to worry about system attributes such as RAM, but I don't see how this sort of problem can be optimally solved without taking this into account. As a result, I don't really know how to get started on this sort of thing.\nQuestion\nDoes anyone know how to go about solving this sort of problem? 
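One way to realise the "map indexed by dates with counts" idea from the answer above is to record +1 at each alarm start and -1 at each clear, then plot the running total; the +1/-1 bookkeeping and the day/month/year timestamp format are assumptions beyond the original answer.

```python
from collections import defaultdict
from datetime import datetime
import matplotlib.pyplot as plt

rows = [  # (name, alarm start, alarm clear) -- sample data from the question
    ("NE1", "15:42 08/09/11", "15:56 08/09/11"),
    ("NE2", "15:42 08/09/11", "15:57 08/09/11"),
    ("NE3", "15:42 08/09/11", "16:31 08/09/11"),
    ("NE4", "15:42 08/09/11", "15:59 08/09/11"),
]

fmt = "%H:%M %d/%m/%y"                   # assumed timestamp format
events = defaultdict(int)                # timestamp -> net change in "down" count
for _, start, clear in rows:
    events[datetime.strptime(start, fmt)] += 1
    events[datetime.strptime(clear, fmt)] -= 1

times = sorted(events)
down, running = [], 0
for t in times:
    running += events[t]
    down.append(running)                 # elements down from this timestamp onward

plt.step(times, down, where="post")
plt.ylabel("Network elements down")
plt.gcf().autofmt_xdate()
plt.show()
```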
I python simply not the right language for this sort of thing? Or is there a simple solution to maximise how much is done from memory (within reason) while minimising how many times data must be read from the disk, or written to it?\nMany thanks for your attention. I look forward to seeing what the bright minds of stackoverflow can throw my way.\nAdditional Details\nThe sort of machine this problem is run on usually has 20+ cores and ~70G of RAM. The problem can be parallelised (\u00e0 la MapReduce) in that separate vectors for one entity can be built from segments of the corpus and then added to obtain the vector that would have been built from the whole corpus.\nPart of the question involves determining a limit on how much can be built in memory before disk-writes need to occur. Does python offer any mechanism to determine how much RAM is available?","AnswerCount":11,"Available Count":4,"Score":0.0363476168,"is_accepted":false,"ViewCount":3167,"Q_Id":7381258,"Users Score":2,"Answer":"Two ideas:\n\nUse numpy arrays to represent vectors. They are much more memory-efficient, at the cost that they will force elements of the vector to be of the same type (all ints or all doubles...).\nDo multiple passes, each with a different set of vectors. That is, choose first 1M vectors and do only the calculations involving them (you said they are independent, so I assume this is viable). Then another pass over all the data with second 1M vectors.\n\nIt seems you're on the edge of what you can do with your hardware. It would help if you could describe what hardware (mostly, RAM) is available to you for this task. If there are 100k vectors, each of them with 1M ints, this gives ~370GB. If multiple passes method is viable and you've got a machine with 16GB RAM, then it is about ~25 passes -- should be easy to parallelize if you've got a cluster.","Q_Score":17,"Tags":"python,memory,io","A_Id":7381424,"CreationDate":"2011-09-11T21:08:00.000","Title":"Minimising reading from and writing to disk in Python for a memory-heavy operation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Background\nI am working on a fairly computationally intensive project for a computational linguistics project, but the problem I have is quite general and hence I expect that a solution would be interesting to others as well.\nRequirements\nThe key aspect of this particular program I must write is that it must: \n\nRead through a large corpus (between 5G and 30G, and potentially larger stuff down the line)\nProcess the data on each line.\nFrom this processed data, construct a large number of vectors (dimensionality of some of these vectors is > 4,000,000). Typically it is building hundreds of thousands of such vectors.\nThese vectors must all be saved to disk in some format or other.\n\nSteps 1 and 2 are not hard to do efficiently: just use generators and have a data-analysis pipeline. The big problem is operation 3 (and by connection 4)\nParenthesis: Technical Details\nIn case the actual procedure for building vectors affects the solution:\nFor each line in the corpus, one or more vectors must have its basis weights updated. 
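A sketch of the "multiple passes, each with a different set of vectors" suggestion above; the tab-separated line format, the placeholder parse() helper and the np.save() output naming are all assumptions made for illustration.

```python
import numpy as np

def parse(line):
    """Placeholder parser: yields (vector_id, index, weight) triples from one
    corpus line, assuming 'vec_id<TAB>index:weight index:weight ...' lines.
    The real extraction logic is problem-specific."""
    vec_id, rest = line.split("\t", 1)
    for pair in rest.split():
        index, weight = pair.split(":")
        yield int(vec_id), int(index), int(weight)

def build_in_batches(corpus_path, n_vectors, dim, batch_size, out_dir):
    """Several passes over the corpus, as suggested above: each pass builds one
    contiguous batch of vectors in memory, then dumps it to disk and frees it."""
    for start in range(0, n_vectors, batch_size):
        stop = min(start + batch_size, n_vectors)
        block = np.zeros((stop - start, dim), dtype=np.int64)
        with open(corpus_path) as corpus:
            for line in corpus:
                for vec_id, index, weight in parse(line):
                    if start <= vec_id < stop:
                        block[vec_id - start, index] += weight
        np.save(f"{out_dir}/vectors_{start}_{stop}.npy", block)
```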
\nIf you think of them in terms of python lists, each line, when processed, updates one or more lists (creating them if needed) by incrementing the values of these lists at one or more indices by a value (which may differ based on the index). \nVectors do not depend on each other, nor does it matter which order the corpus lines are read in.\nAttempted Solutions\nThere are three extrema when it comes to how to do this:\n\nI could build all the vectors in memory. Then write them to disk.\nI could build all the vectors directly on the disk, using shelf of pickle or some such library.\nI could build the vectors in memory one at a time and writing it to disk, passing through the corpus once per vector.\n\nAll these options are fairly intractable. 1 just uses up all the system memory, and it panics and slows to a crawl. 2 is way too slow as IO operations aren't fast. 3 is possibly even slower than 2 for the same reasons.\nGoals\nA good solution would involve:\n\nBuilding as much as possible in memory.\nOnce memory is full, dump everything to disk.\nIf bits are needed from disk again, recover them back into memory to add stuff to those vectors. \nGo back to 1 until all vectors are built.\n\nThe problem is that I'm not really sure how to go about this. It seems somewhat unpythonic to worry about system attributes such as RAM, but I don't see how this sort of problem can be optimally solved without taking this into account. As a result, I don't really know how to get started on this sort of thing.\nQuestion\nDoes anyone know how to go about solving this sort of problem? I python simply not the right language for this sort of thing? Or is there a simple solution to maximise how much is done from memory (within reason) while minimising how many times data must be read from the disk, or written to it?\nMany thanks for your attention. I look forward to seeing what the bright minds of stackoverflow can throw my way.\nAdditional Details\nThe sort of machine this problem is run on usually has 20+ cores and ~70G of RAM. The problem can be parallelised (\u00e0 la MapReduce) in that separate vectors for one entity can be built from segments of the corpus and then added to obtain the vector that would have been built from the whole corpus.\nPart of the question involves determining a limit on how much can be built in memory before disk-writes need to occur. Does python offer any mechanism to determine how much RAM is available?","AnswerCount":11,"Available Count":4,"Score":0.0181798149,"is_accepted":false,"ViewCount":3167,"Q_Id":7381258,"Users Score":1,"Answer":"Use a database. That problem seems large enough that language choice (Python, Perl, Java, etc) won't make a difference. If each dimension of the vector is a column in the table, adding some indexes is probably a good idea. 
In any case this is a lot of data and won't process terribly quickly.","Q_Score":17,"Tags":"python,memory,io","A_Id":7381462,"CreationDate":"2011-09-11T21:08:00.000","Title":"Minimising reading from and writing to disk in Python for a memory-heavy operation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Background\nI am working on a fairly computationally intensive project for a computational linguistics project, but the problem I have is quite general and hence I expect that a solution would be interesting to others as well.\nRequirements\nThe key aspect of this particular program I must write is that it must: \n\nRead through a large corpus (between 5G and 30G, and potentially larger stuff down the line)\nProcess the data on each line.\nFrom this processed data, construct a large number of vectors (dimensionality of some of these vectors is > 4,000,000). Typically it is building hundreds of thousands of such vectors.\nThese vectors must all be saved to disk in some format or other.\n\nSteps 1 and 2 are not hard to do efficiently: just use generators and have a data-analysis pipeline. The big problem is operation 3 (and by connection 4)\nParenthesis: Technical Details\nIn case the actual procedure for building vectors affects the solution:\nFor each line in the corpus, one or more vectors must have its basis weights updated. \nIf you think of them in terms of python lists, each line, when processed, updates one or more lists (creating them if needed) by incrementing the values of these lists at one or more indices by a value (which may differ based on the index). \nVectors do not depend on each other, nor does it matter which order the corpus lines are read in.\nAttempted Solutions\nThere are three extrema when it comes to how to do this:\n\nI could build all the vectors in memory. Then write them to disk.\nI could build all the vectors directly on the disk, using shelf of pickle or some such library.\nI could build the vectors in memory one at a time and writing it to disk, passing through the corpus once per vector.\n\nAll these options are fairly intractable. 1 just uses up all the system memory, and it panics and slows to a crawl. 2 is way too slow as IO operations aren't fast. 3 is possibly even slower than 2 for the same reasons.\nGoals\nA good solution would involve:\n\nBuilding as much as possible in memory.\nOnce memory is full, dump everything to disk.\nIf bits are needed from disk again, recover them back into memory to add stuff to those vectors. \nGo back to 1 until all vectors are built.\n\nThe problem is that I'm not really sure how to go about this. It seems somewhat unpythonic to worry about system attributes such as RAM, but I don't see how this sort of problem can be optimally solved without taking this into account. As a result, I don't really know how to get started on this sort of thing.\nQuestion\nDoes anyone know how to go about solving this sort of problem? I python simply not the right language for this sort of thing? Or is there a simple solution to maximise how much is done from memory (within reason) while minimising how many times data must be read from the disk, or written to it?\nMany thanks for your attention. 
I look forward to seeing what the bright minds of stackoverflow can throw my way.\nAdditional Details\nThe sort of machine this problem is run on usually has 20+ cores and ~70G of RAM. The problem can be parallelised (\u00e0 la MapReduce) in that separate vectors for one entity can be built from segments of the corpus and then added to obtain the vector that would have been built from the whole corpus.\nPart of the question involves determining a limit on how much can be built in memory before disk-writes need to occur. Does python offer any mechanism to determine how much RAM is available?","AnswerCount":11,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":3167,"Q_Id":7381258,"Users Score":0,"Answer":"Split the corpus evenly in size between parallel jobs (one per core) - process in parallel, ignoring any incomplete line (or if you cannot tell if it is incomplete, ignore the first and last line of that each job processes).\nThat's the map part.\nUse one job to merge the 20+ sets of vectors from each of the earlier jobs - That's the reduce step.\nYou stand to loose information from 2*N lines where N is the number of parallel processes, but you gain by not adding complicated logic to try and capture these lines for processing.","Q_Score":17,"Tags":"python,memory,io","A_Id":7433853,"CreationDate":"2011-09-11T21:08:00.000","Title":"Minimising reading from and writing to disk in Python for a memory-heavy operation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Background\nI am working on a fairly computationally intensive project for a computational linguistics project, but the problem I have is quite general and hence I expect that a solution would be interesting to others as well.\nRequirements\nThe key aspect of this particular program I must write is that it must: \n\nRead through a large corpus (between 5G and 30G, and potentially larger stuff down the line)\nProcess the data on each line.\nFrom this processed data, construct a large number of vectors (dimensionality of some of these vectors is > 4,000,000). Typically it is building hundreds of thousands of such vectors.\nThese vectors must all be saved to disk in some format or other.\n\nSteps 1 and 2 are not hard to do efficiently: just use generators and have a data-analysis pipeline. The big problem is operation 3 (and by connection 4)\nParenthesis: Technical Details\nIn case the actual procedure for building vectors affects the solution:\nFor each line in the corpus, one or more vectors must have its basis weights updated. \nIf you think of them in terms of python lists, each line, when processed, updates one or more lists (creating them if needed) by incrementing the values of these lists at one or more indices by a value (which may differ based on the index). \nVectors do not depend on each other, nor does it matter which order the corpus lines are read in.\nAttempted Solutions\nThere are three extrema when it comes to how to do this:\n\nI could build all the vectors in memory. Then write them to disk.\nI could build all the vectors directly on the disk, using shelf of pickle or some such library.\nI could build the vectors in memory one at a time and writing it to disk, passing through the corpus once per vector.\n\nAll these options are fairly intractable. 1 just uses up all the system memory, and it panics and slows to a crawl. 
2 is way too slow as IO operations aren't fast. 3 is possibly even slower than 2 for the same reasons.\nGoals\nA good solution would involve:\n\nBuilding as much as possible in memory.\nOnce memory is full, dump everything to disk.\nIf bits are needed from disk again, recover them back into memory to add stuff to those vectors. \nGo back to 1 until all vectors are built.\n\nThe problem is that I'm not really sure how to go about this. It seems somewhat unpythonic to worry about system attributes such as RAM, but I don't see how this sort of problem can be optimally solved without taking this into account. As a result, I don't really know how to get started on this sort of thing.\nQuestion\nDoes anyone know how to go about solving this sort of problem? I python simply not the right language for this sort of thing? Or is there a simple solution to maximise how much is done from memory (within reason) while minimising how many times data must be read from the disk, or written to it?\nMany thanks for your attention. I look forward to seeing what the bright minds of stackoverflow can throw my way.\nAdditional Details\nThe sort of machine this problem is run on usually has 20+ cores and ~70G of RAM. The problem can be parallelised (\u00e0 la MapReduce) in that separate vectors for one entity can be built from segments of the corpus and then added to obtain the vector that would have been built from the whole corpus.\nPart of the question involves determining a limit on how much can be built in memory before disk-writes need to occur. Does python offer any mechanism to determine how much RAM is available?","AnswerCount":11,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":3167,"Q_Id":7381258,"Users Score":0,"Answer":"From another comment I infer that your corpus fits into the memory, and you have some cores to throw at the problem, so I would try this:\n\nFind a method to have your corpus in memory. This might be a sort of ram disk with file system, or a database. No idea, which one is best for you. \nHave a smallish shell script monitor ram usage, and spawn every second another process of the following, as long as there is x memory left (or, if you want to make things a bit more complex, y I\/O bandwith to disk):\n\niterate through the corpus and build and write some vectors\n\nin the end you can collect and combine all vectors, if needed (this would be the reduce part)","Q_Score":17,"Tags":"python,memory,io","A_Id":7381527,"CreationDate":"2011-09-11T21:08:00.000","Title":"Minimising reading from and writing to disk in Python for a memory-heavy operation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to train multiple one class SVMs in different threads.\nDoes anybody know if scikit's SVM releases the GIL?\nI did not find any answers online.\nThanks","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1679,"Q_Id":7391427,"Users Score":4,"Answer":"Some sklearn Cython classes do release the GIL internally on performance critical sections, for instance the decision trees (used in random forests for instance) as of 0.15 (to be released early 2014) and the libsvm wrappers do.\nThis is not the general rule though. 
If you identify performance critical cython code in sklearn that could be changed to release the GIL please feel free to send a pull request.","Q_Score":5,"Tags":"python,multithreading,parallel-processing,machine-learning,scikit-learn","A_Id":20999120,"CreationDate":"2011-09-12T17:10:00.000","Title":"Does Scikit-learn release the python GIL?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am going to implement Naive Bayes classifier with Python and classify e-mails as Spam or Not spam. I have a very sparse and long dataset with many entries. Each entry is like the following:\n1 9:3 94:1 109:1 163:1 405:1 406:1 415:2 416:1 435:3 436:3 437:4 ...\nWhere 1 is label (spam, not spam), and each pair corresponds to a word and its frequency. E.g. 9:3 corresponds to the word 9 and it occurs 3 times in this e-mail sample.\nI need to read this dataset and store it in a structure. Since it's a very big and sparse dataset, I'm looking for a neat data structure to store the following variables:\n\nthe index of each e-mail\nlabel of it (1 or -1)\nword and it's frequency per each e-mail\nI also need to create a corpus of all words and their frequency with the label information\n\nAny suggestions for such a data structure?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":381,"Q_Id":7452917,"Users Score":0,"Answer":"If you assume you didn't care about multiple occurrences of each word in an email, then all you really need to know is (that is, your features are booleans):\nFor each feature, what is the count of positive associations and negative associations?\nYou can do this online very easily in one pass, keeping track of just those two numbers for each feature.\nThe non-boolean features means you'll have to discretize the features some how, but you aren't really asking about how to do that.","Q_Score":1,"Tags":"python,data-structures,machine-learning,spam-prevention","A_Id":7453107,"CreationDate":"2011-09-17T06:22:00.000","Title":"I need a neat data structure suggestion to store a very large dataset (to train Naive Bayes in Python)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a good implementation for logistic regression (not regularized) in Python. I'm looking for a package that can also get weights for each vector. Can anyone suggest a good implementation \/ package?\nThanks!","AnswerCount":5,"Available Count":1,"Score":-1.0,"is_accepted":false,"ViewCount":23971,"Q_Id":7513067,"Users Score":-5,"Answer":"Do you know Numpy? 
If no, take a look also to Scipy and matplotlib.","Q_Score":16,"Tags":"python,regression","A_Id":7513167,"CreationDate":"2011-09-22T10:08:00.000","Title":"Weighted logistic regression in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For the purpose of conducting a psychological experiment I have to divide a set of pictures (240) described by 4 features (real numbers) into 3 subsets with equal number of elements in each subset (240\/3 = 80) in such a way that all subsets are approximately balanced with respect to these features (in terms of mean and standard deviation).\nCan anybody suggest an algorithm to automate that? Are there any packages\/modules in Python or R that I could use to do that? Where should I start?","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":9742,"Q_Id":7539186,"Users Score":2,"Answer":"I would tackle this as follows:\n\nDivide into 3 equal subsets.\nFigure out the mean and variance of each subset. From them construct an \"unevenness\" measure.\nCompare each pair of elements, if swapping would reduce the \"unevenness\", swap them. Continue until there are either no more pairs to compare, or the total unevenness is below some arbitrary \"good enough\" threshold.","Q_Score":5,"Tags":"python,algorithm,r","A_Id":7539484,"CreationDate":"2011-09-24T13:02:00.000","Title":"Divide set into subsets with equal number of elements","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For the purpose of conducting a psychological experiment I have to divide a set of pictures (240) described by 4 features (real numbers) into 3 subsets with equal number of elements in each subset (240\/3 = 80) in such a way that all subsets are approximately balanced with respect to these features (in terms of mean and standard deviation).\nCan anybody suggest an algorithm to automate that? Are there any packages\/modules in Python or R that I could use to do that? Where should I start?","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":9742,"Q_Id":7539186,"Users Score":1,"Answer":"In case you are still interested in the exhaustive search question. You have 240 choose 80 possibilities to choose the first set and then another 160 choose 80 for the second set, at which point the third set is fixed. In total, this gives you:\n120554865392512357302183080835497490140793598233424724482217950647 * 92045125813734238026462263037378063990076729140\nClearly, this is not an option :)","Q_Score":5,"Tags":"python,algorithm,r","A_Id":7548438,"CreationDate":"2011-09-24T13:02:00.000","Title":"Divide set into subsets with equal number of elements","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using matplotlib.pyplot (with Eclipse on Windows). Every time I run my code it opens several pyplot figure windows. \nThe problem is that if I don't close those windows manually they accumulate. 
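The divide-then-swap heuristic described above could be sketched like this; the specific unevenness measure, the sweep count and the random starting split are assumptions, and the result is only approximately balanced, not optimal.

```python
import numpy as np

def unevenness(features, groups):
    """Spread of per-group means and standard deviations; smaller = more balanced."""
    means = np.array([features[g].mean(axis=0) for g in groups])
    stds = np.array([features[g].std(axis=0) for g in groups])
    return means.std(axis=0).sum() + stds.std(axis=0).sum()

def balance(features, n_groups=3, sweeps=5, seed=0):
    """Start from a random equal split, then greedily swap pairs of items
    between groups whenever the swap lowers the unevenness measure."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    groups = [list(chunk) for chunk in np.array_split(idx, n_groups)]
    current = unevenness(features, groups)
    for _ in range(sweeps):                     # each sweep is O(n^2) pair checks
        improved = False
        for a in range(n_groups):
            for b in range(a + 1, n_groups):
                for i in range(len(groups[a])):
                    for j in range(len(groups[b])):
                        groups[a][i], groups[b][j] = groups[b][j], groups[a][i]
                        candidate = unevenness(features, groups)
                        if candidate < current:
                            current = candidate
                            improved = True
                        else:                   # undo the swap
                            groups[a][i], groups[b][j] = groups[b][j], groups[a][i]
        if not improved:
            break
    return groups, current

if __name__ == "__main__":
    data = np.random.default_rng(1).normal(size=(240, 4))   # 240 items, 4 features
    groups, score = balance(data)
    print([len(g) for g in groups], round(score, 4))
```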
I would like to use pyplot to find those windows (opened by another process of python.exe) and re-use them. In other words, I do not want to have multiple windows for the same figure, even across interpreter processes.\nIs there a simple way to do it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":302,"Q_Id":7655323,"Users Score":1,"Answer":"There is no simple way to reuse plot windows if you must use eclipse to run it. When I am working interactively with matplotlib, I use either spyder or ipython. Edit class, reload class, and run code again. If you just want to get rid of all the open plot windows, hit the stacked stop icons to kill all your runing python instances.","Q_Score":3,"Tags":"python,matplotlib","A_Id":7673206,"CreationDate":"2011-10-04T23:56:00.000","Title":"Pyplot\/Matplotlib: How to access figures opened by another interpreter?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to code a Maximum Likelihood Estimator to estimate the mean and variance of some toy data. I have a vector with 100 samples, created with numpy.random.randn(100). The data should have zero mean and unit variance Gaussian distribution.\nI checked Wikipedia and some extra sources, but I am a little bit confused since I don't have a statistics background.\nIs there any pseudo code for a maximum likelihood estimator? I get the intuition of MLE but I cannot figure out where to start coding.\nWiki says taking argmax of log-likelihood. What I understand is: I need to calculate log-likelihood by using different parameters and then I'll take the parameters which gave the maximum probability. What I don't get is: where will I find the parameters in the first place? If I randomly try different mean & variance to get a high probability, when should I stop trying?","AnswerCount":4,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":47657,"Q_Id":7718034,"Users Score":4,"Answer":"You need a numerical optimisation procedure. Not sure if anything is implemented in Python, but if it is then it'll be in numpy or scipy and friends.\nLook for things like 'the Nelder-Mead algorithm', or 'BFGS'. If all else fails, use Rpy and call the R function 'optim()'.\nThese functions work by searching the function space and trying to work out where the maximum is. Imagine trying to find the top of a hill in fog. You might just try always heading up the steepest way. Or you could send some friends off with radios and GPS units and do a bit of surveying. Either method could lead you to a false summit, so you often need to do this a few times, starting from different points. Otherwise you may think the south summit is the highest when there's a massive north summit overshadowing it.","Q_Score":28,"Tags":"python,statistics,machine-learning,pseudocode","A_Id":7725290,"CreationDate":"2011-10-10T20:05:00.000","Title":"Maximum Likelihood Estimate pseudocode","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I generate a uniformly distributed [-1,1]^d data in Python? E.g. 
d is a dimension like 10.\nI know how to generate uniformly distributed data like np.random.randn(N) but dimension thing is confused me a lot.","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":13886,"Q_Id":7733969,"Users Score":2,"Answer":"You can import the random module and call random.random to get a random sample from [0, 1). You can double that and subtract 1 to get a sample from [-1, 1). \nDraw d values this way and the tuple will be a uniform draw from the cube [-1, 1)^d.","Q_Score":7,"Tags":"python,numpy,machine-learning,scipy","A_Id":7734072,"CreationDate":"2011-10-12T00:15:00.000","Title":"Uniformly distributed data in d dimensions","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a complex Python program. I'm trying to use the multiprocess Pool to parallelize it. I get the error message\nPicklingError: Can't pickle : attribute lookup __builtin__.function failed.\nThe traceback shows the statemen return send(obj)\nMy hypothesis is that its the \"obj\" that is causing the problem and that I need to make it pickle-able. \nHow can I determine which object is the cause of the problem? The program is complex and simply guessing might take a long time.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":7746484,"Users Score":0,"Answer":"The error you're seeing could be caused by passing the wrong kind of function to the multiprocessing.Pool methods. The passed function must be directly importable from its parent module. It cannot be a method of a class, for instance.","Q_Score":0,"Tags":"python,multiprocessing","A_Id":7746882,"CreationDate":"2011-10-12T20:57:00.000","Title":"in Python using the multiprocessing module, how can I determine which object caused a PicklingError?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I implement graph colouring in python using adjacency matrix? Is it possible? I implemented it using list. But it has some problems. I want to implement it using matrix. Can anybody give me the answer or suggestions to this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1943,"Q_Id":7758913,"Users Score":0,"Answer":"Implementing using adjacency is somewhat easier than using lists, as lists take a longer time and space. igraph has a quick method neighbors which can be used. However, with adjacency matrix alone, we can come up with our own graph coloring version which may not result in using minimum chromatic number. A quick strategy may be as follows: \nInitalize: Put one distinct color for nodes on each row (where a 1 appears) \nStart: With highest degree node (HDN) row as a reference, compare each row (meaning each node) with the HDN and see if it is also its neighbor by detecting a 1. If yes, then change that nodes color. Proceed like this to fine-tune. O(N^2) approach! 
Hope this helps.","Q_Score":2,"Tags":"python,graph-algorithm","A_Id":37692584,"CreationDate":"2011-10-13T18:44:00.000","Title":"Graph colouring in python using adjacency matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have a 16 core machine, and an embarrassingly parallel program. I use lots of numpy dot products and addition of numpy arrays, and if I did not use multiprocessing it would be a no-brainer: Make sure numpy is built against a version of blas that uses multithreading. However, I am using multiprocessing, and all cores are working hard at all times. In this case, is there any benefit to be had from using a multithreading blas?\nMost of the operations are (blas) type 1, some are type 2.","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":2321,"Q_Id":7761859,"Users Score":2,"Answer":"If you are already using multiprocessing, and all cores are at max load, then there will be very little, if any, benefit to adding threads that will be waiting around for a processor.\nDepending on your algorithm and what you're doing, it may be more beneficial to use one type over the other, but that's very dependent.","Q_Score":5,"Tags":"python,multithreading,numpy,multiprocessing,blas","A_Id":7761877,"CreationDate":"2011-10-14T00:13:00.000","Title":"Is it worth using a multithreaded blas implementation along with multiprocessing in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have a 16 core machine, and an embarrassingly parallel program. I use lots of numpy dot products and addition of numpy arrays, and if I did not use multiprocessing it would be a no-brainer: Make sure numpy is built against a version of blas that uses multithreading. However, I am using multiprocessing, and all cores are working hard at all times. In this case, is there any benefit to be had from using a multithreading blas?\nMost of the operations are (blas) type 1, some are type 2.","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":2321,"Q_Id":7761859,"Users Score":6,"Answer":"You might need to be a little careful about the assumption that your code is actually used multithreaded BLAS calls. Relatively few numpy operators actually use the underlying BLAS, and relatively few BLAS calls are actually multithreaded. numpy.dot uses either BLAS dot, gemv or gemm, depending on the operation, but of those, only gemm is usually multithreaded, because there is rarely any performance benefit for the O(N) and O(N^2) BLAS calls in doing so. 
If you are limiting yourself to Level 1 and Level 2 BLAS operations, I doubt you are actually using any multithreaded BLAS calls, even if you are using a numpy implementation built with a mulithreaded BLAS, like Atlas or MKL.","Q_Score":5,"Tags":"python,multithreading,numpy,multiprocessing,blas","A_Id":7765829,"CreationDate":"2011-10-14T00:13:00.000","Title":"Is it worth using a multithreaded blas implementation along with multiprocessing in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to Numpy and trying to search for a function to list out the variables along with their sizes (both the matrix dimensions as well as memory usage). \nI am essentially looking for an equivalent of the \"whos\" command in MATLAB and Octave. Does there exist any such command in NumPy?","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":21664,"Q_Id":7774964,"Users Score":8,"Answer":"Python has a builtin function dir() which returns the list of names in the current local scope.","Q_Score":22,"Tags":"python,matlab,numpy,octave","A_Id":7779373,"CreationDate":"2011-10-15T00:42:00.000","Title":"Equivalent of \"whos\" command in NumPy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to Numpy and trying to search for a function to list out the variables along with their sizes (both the matrix dimensions as well as memory usage). \nI am essentially looking for an equivalent of the \"whos\" command in MATLAB and Octave. Does there exist any such command in NumPy?","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":21664,"Q_Id":7774964,"Users Score":0,"Answer":"try using: type(VAR_NAME) \nthis will output the class type for that particular variable, VAR_NAME","Q_Score":22,"Tags":"python,matlab,numpy,octave","A_Id":60568982,"CreationDate":"2011-10-15T00:42:00.000","Title":"Equivalent of \"whos\" command in NumPy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In Enthought Python Distribution, I saw it includes pyhdf and numpy. Since it includes pyhdf, does it also include HDF 4?\nI am using pylab to code at this moment.\nBecause I want to use a module of the pyhdf package called pyhdf.SD. And it Prerequisites HDF 4 library. So do I still need to install HDF 4 if I want to use pyhdf.SD?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":258,"Q_Id":7786232,"Users Score":1,"Answer":"For the versions of EPD that include pyhdf, you don't need to install HDF 4 separately. 
However, note that pyhdf is not included in all versions of EPD---in particular, it's not included in the 64-bit Windows EPD or the 64-bit OS X EPD, though it is in the 32-bit versions.","Q_Score":1,"Tags":"python,image-processing","A_Id":7786451,"CreationDate":"2011-10-16T18:07:00.000","Title":"does Enthought Python distribution includes pyhdf and HDF 4","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some integer matrices of moderate size (a few hundred rows). I need to solve equations of the form Ax = b where b is a standard basis vector and A is one of my matrices. I have been using numpy.linalg.lstsq for this purpose, but the rounding errors end up being too significant. \nHow can I carry out an exact symbolic computation?\n(PS I don't really need the code to be efficient; I'm more concerned about ease of coding.)","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":403,"Q_Id":7787732,"Users Score":1,"Answer":"Note that if you're serious about your comment that you require your solution vector to be integer, then you're looking for something called the \"integer least squares problem\". Which is believed to be NP-hard. There are some heuristic solvers, but it all gets very complicated.","Q_Score":1,"Tags":"python,numpy","A_Id":7787997,"CreationDate":"2011-10-16T22:18:00.000","Title":"How do I do matrix computations in python without rounding?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to store\/retrieve a large number of images to use in my program.\nEach image is small: an icon 50x50, and each one has associated a string which is the path the icon is related to.\nSince they are so small I was thinking if there is some library which allows to store all of them in a single file. \nI would need to store both the image and the path string.\nI don't know if pickle is a possible choice - I also heard about much more complicated libraries such as HDF5...\nthanks for your help!\nalessandro","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":165,"Q_Id":7846413,"Users Score":0,"Answer":"You could pickle a dict that associates filenames to byte strings of RGBA data.\nAssuming you have loaded the image with PIL, make sure they have all the same size and color format. Build a dict with images[filename] = im.tostring() and dump() it with pickle. Use Image.fromstring with the right size and mode parameters to get it back.","Q_Score":2,"Tags":"python,database","A_Id":7847894,"CreationDate":"2011-10-21T07:41:00.000","Title":"python: container to memorize a large number of images","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I call random.sample(arr,length)\nan error returns random_sample() takes at most 1 positional argument (2 given). After some Googling I found out I'm calling Numpy's random sample function when I want to call the sample function of the random module. I've tried importing numpy under a different name, which doesn't fix the problem. 
I need Numpy for the rest of the program, though.\nAny thoughts? Thanks","AnswerCount":4,"Available Count":1,"Score":0.2449186624,"is_accepted":false,"ViewCount":4375,"Q_Id":7855845,"Users Score":5,"Answer":"This shouldn't happen. Check your code for bad imports like from numpy import *.","Q_Score":5,"Tags":"python,numpy,random-sample","A_Id":7855863,"CreationDate":"2011-10-21T22:09:00.000","Title":"Python's random module made inaccessible by Numpy's random module","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently asked about accessing data from SPSS and got some absolutely wonderful help here. I now have an almost identical need to read data from a Confirmit data file. Not finding a ton of confirmit data file format on the web. It appears that Confirmit can export to SPSS *.sav files. This might be one avenue for me. Here's the exact needs:\nI need to be able to extract two different but related types of info from a market research study done using ConfirmIt:\n\nI need to be able to discover the data \"schema\", as in what questions are being asked (the text of the questions) and what the type of the answer is (multiple choice, yes\/no, text) and what text labels are associated with each answer.\nI need to be able to read respondents answers and populate my data model. So for each of the questions discovered as part of step 1 above, I need to build a table of respondent answers.\n\nWith SPSS this was easy thanks to a data access module available freely available by IBM and a nice Python wrapper by Albert-Jan Roskam. Googling I'm not finding much info. Any insight into this is helpful. Something like a Python or Java class to read the confirmit data would be perfect!\nAssuming my best option ends up being to export to SPSS *.sav file, does anyone know if it will meet both of my use cases above (contain the questions, answers schema and also contain each participant's results)?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":768,"Q_Id":7856725,"Users Score":0,"Answer":"I was recently given a data set from confirmit. There are almost 4000 columns in the excel file. I want to enter it into a mysql db. There is not way they are just doing that output from one table. Do you know how the table schema works for confirmit?","Q_Score":2,"Tags":"python,spss","A_Id":11026071,"CreationDate":"2011-10-22T00:57:00.000","Title":"Anyone familiar with data format of Comfirmit?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently asked about accessing data from SPSS and got some absolutely wonderful help here. I now have an almost identical need to read data from a Confirmit data file. Not finding a ton of confirmit data file format on the web. It appears that Confirmit can export to SPSS *.sav files. This might be one avenue for me. 
Here's the exact needs:\nI need to be able to extract two different but related types of info from a market research study done using ConfirmIt:\n\nI need to be able to discover the data \"schema\", as in what questions are being asked (the text of the questions) and what the type of the answer is (multiple choice, yes\/no, text) and what text labels are associated with each answer.\nI need to be able to read respondents answers and populate my data model. So for each of the questions discovered as part of step 1 above, I need to build a table of respondent answers.\n\nWith SPSS this was easy thanks to a data access module available freely available by IBM and a nice Python wrapper by Albert-Jan Roskam. Googling I'm not finding much info. Any insight into this is helpful. Something like a Python or Java class to read the confirmit data would be perfect!\nAssuming my best option ends up being to export to SPSS *.sav file, does anyone know if it will meet both of my use cases above (contain the questions, answers schema and also contain each participant's results)?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":768,"Q_Id":7856725,"Users Score":0,"Answer":"You can get the data schema from Excel definition export from Confirmit\nYou can export from Confirmit txt file with the same template","Q_Score":2,"Tags":"python,spss","A_Id":10631838,"CreationDate":"2011-10-22T00:57:00.000","Title":"Anyone familiar with data format of Comfirmit?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I using matplotlib to plot some data in python and the plots require a standard colour bar. The data consists of a series of NxM matrices containing frequency information so that a simple imshow() plot gives a 2D histogram with colour describing frequency. Each matrix contains data in different, but overlapping ranges. Imshow normalizes the data in each matrix to the range 0-1 which means that, for example, the plot of matrix A, will appear identical to the plot of the matrix 2*A (though the colour bar will show double the values). What I would like is for the colour red, for example, to correspond to the same frequency in all of the plots. In other words, a single colour bar would suffice for all the plots. 
Any suggestions would be greatly appreciated.","AnswerCount":4,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":46914,"Q_Id":7875688,"Users Score":9,"Answer":"Easiest solution is to call clim(lower_limit, upper_limit) with the same arguments for each plot.","Q_Score":38,"Tags":"python,matplotlib,colorbar","A_Id":7878155,"CreationDate":"2011-10-24T12:34:00.000","Title":"How can I create a standard colorbar for a series of plots in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"There seems to be many choices for Python to interface with SQLite (sqlite3, atpy) and HDF5 (h5py, pyTables) -- I wonder if anyone has experience using these together with numpy arrays or data tables (structured\/record arrays), and which of these most seamlessly integrate with \"scientific\" modules (numpy, scipy) for each data format (SQLite and HDF5).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3647,"Q_Id":7883646,"Users Score":23,"Answer":"Most of it depends on your use case. \nI have a lot more experience dealing with the various HDF5-based methods than traditional relational databases, so I can't comment too much on SQLite libraries for python...\nAt least as far as h5py vs pyTables, they both offer very seamless access via numpy arrays, but they're oriented towards very different use cases.\nIf you have n-dimensional data that you want to quickly access an arbitrary index-based slice of, then it's much more simple to use h5py. If you have data that's more table-like, and you want to query it, then pyTables is a much better option.\nh5py is a relatively \"vanilla\" wrapper around the HDF5 libraries compared to pyTables. This is a very good thing if you're going to be regularly accessing your HDF file from another language (pyTables adds some extra metadata). h5py can do a lot, but for some use cases (e.g. what pyTables does) you're going to need to spend more time tweaking things. \npyTables has some really nice features. However, if your data doesn't look much like a table, then it's probably not the best option.\nTo give a more concrete example, I work a lot with fairly large (tens of GB) 3 and 4 dimensional arrays of data. They're homogenous arrays of floats, ints, uint8s, etc. I usually want to access a small subset of the entire dataset. h5py makes this very simple, and does a fairly good job of auto-guessing a reasonable chunk size. Grabbing an arbitrary chunk or slice from disk is much, much faster than for a simple memmapped file. (Emphasis on arbitrary... Obviously, if you want to grab an entire \"X\" slice, then a C-ordered memmapped array is impossible to beat, as all the data in an \"X\" slice are adjacent on disk.) \nAs a counter example, my wife collects data from a wide array of sensors that sample at minute to second intervals over several years. She needs to store and run arbitrary querys (and relatively simple calculations) on her data. pyTables makes this use case very easy and fast, and still has some advantages over traditional relational databases. 
(Particularly in terms of disk usage and speed at which a large (index-based) chunk of data can be read into memory)","Q_Score":12,"Tags":"python,sqlite,numpy,scipy,hdf5","A_Id":7891137,"CreationDate":"2011-10-25T01:06:00.000","Title":"exporting from\/importing to numpy, scipy in SQLite and HDF5 formats","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In a java application I need to use a specific image processing algorithm that is currently implemented in python.\nWhat would be the best approach, knowing that this script uses the Numpy library ?\nI alreayd tried to compile the script to java using the jythonc compiler but it seems that it doesn't support dependencies on native libraries like Numpy.\nI also tried to use Jepp but I get an ImportError when importing Numpy, too.\nAny suggestion ?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2255,"Q_Id":7891586,"Users Score":3,"Answer":"If you're using Numpy you probably have to just use C Python, as it's a compiled extension. I'd recommend saving the image to disk, perhaps as a temporary file, and then calling the Python as a subprocess. If you're dealing with binary data you could even try memory mapping the data in Java and passing in in the path to the subprocess.\nAlternatively, depending on your circumstances, you could set up a simple data processing server in Python which accepts requests and returns processed data.","Q_Score":1,"Tags":"java,python,numpy,jython","A_Id":7892106,"CreationDate":"2011-10-25T15:19:00.000","Title":"Run python script (with numpy dependency) from java","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The most common SQLite interface I've seen in Python is sqlite3, but is there anything that works well with NumPy arrays or recarrays? By that I mean one that recognizes data types and does not require inserting row by row, and extracts into a NumPy (rec)array...? Kind of like R's SQL functions in the RDB or sqldf libraries, if anyone is familiar with those (they import\/export\/append whole tables or subsets of tables to or from R data tables).","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":7905,"Q_Id":7901853,"Users Score":1,"Answer":"This looks a bit older but is there any reason you cannot just do a fetchall() instead of iterating and then just initializing numpy on declaration?","Q_Score":6,"Tags":"python,arrays,sqlite,numpy,scipy","A_Id":12100118,"CreationDate":"2011-10-26T11:15:00.000","Title":"NumPy arrays with SQLite","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use openCV to detect when a person raises or lowers a hand or both hands. I have looked through the tutorials provided by python opencv and none of them seem to do the job. There is a camera that sits in front of the 2 persons, about 50cm away from them(so you see them from the waist up). 
The person is able to raise or lower each arm, or both of the arms and I have to detect when they do that.(the camera is mounted on the bars of the rollercoaster; this implies that the background is always changing)\nHow can I detect this in the fastest time possible? It does not have to be real time detection but it does not have to be more than 0.5seconds. The whole image is 640x480. Now, since the hands can appear only in the top of the image, this would reduce the search area by half => 640x240. This would reduce to the problem of searching a certain object(the hands) in a constantly changing background.\nThank you,\n Stefan F.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":3058,"Q_Id":7904055,"Users Score":1,"Answer":"You can try the very basic but so effective and fast solution:\non the upper half of the image:\n\ncanny edge detection\nmorphologyEx with adequate Structuring element(also simple combination of erode\/dilate may be enough)\nconvert to BW using adaptive threshold\nXor the result with a mask representing the expected covered area.\nThe number of ones returned by xor in each area of the mask is the index that you should use.\n\nThis is extremely fast, you can make more than one iteration within the 0.5 sec and use the average. also you may detect faces and use them to adapt the position of your mask, but this will be more expensive :)\nhope that helps","Q_Score":1,"Tags":"python,opencv,computer-vision,motion-detection","A_Id":9572786,"CreationDate":"2011-10-26T14:21:00.000","Title":"Detect Hand using OpenCV","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am faced with the following programming problem. I need to generate n (a, b) tuples for which the sum of all a's is a given A and sum of all b's is a given B and for each tuple the ratio of a \/ b is in the range (c_min, c_max). A \/ B is within the same range, too. I am also trying to make sure there is no bias in the result other than what is introduced by the constraints and the a \/ b values are more-or-less uniformly distributed in the given range.\nSome clarifications and meta-constraints:\n\nA, B, c_min, and c_max are given. \nThe ratio A \/ B is in the (c_min, c_max) range. This has to be so if the problem is to have a solution given the other constraints.\na and b are >0 and non-integer.\n\nI am trying to implement this in Python but ideas in any language (English included) are much appreciated.","AnswerCount":6,"Available Count":3,"Score":0.0333209931,"is_accepted":false,"ViewCount":2895,"Q_Id":7908800,"Users Score":1,"Answer":"Blocked Gibbs sampling is pretty simple and converges to the right distribution (this is along the lines of what Alexandre is proposing).\n\nFor all i, initialize ai = A \/ n and bi = B \/ n.\nSelect i \u2260 j uniformly at random. With probability 1\/2, update ai and aj with uniform random values satisfying the constraints. The rest of the time, do the same for bi and bj.\nRepeat Step 2 as many times as seems to be necessary for your application. 
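A rough sketch of those steps (function name is just for illustration; it assumes c_min > 0 and c_min < A / B < c_max, so the uniform starting point is feasible):

import random

def gibbs_tuples(n, A, B, c_min, c_max, sweeps=10000):
    a = [A / float(n)] * n
    b = [B / float(n)] * n
    for _ in range(sweeps):
        i, j = random.sample(range(n), 2)
        if random.random() < 0.5:
            # resample a_i and a_j, keeping their sum fixed
            s = a[i] + a[j]
            lo = max(c_min * b[i], s - c_max * b[j])
            hi = min(c_max * b[i], s - c_min * b[j])
            a[i] = random.uniform(lo, hi)
            a[j] = s - a[i]
        else:
            # resample b_i and b_j, keeping their sum fixed
            s = b[i] + b[j]
            lo = max(a[i] / c_max, s - a[j] / c_min)
            hi = min(a[i] / c_min, s - a[j] / c_max)
            b[i] = random.uniform(lo, hi)
            b[j] = s - b[i]
    return list(zip(a, b))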
I have no idea what the convergence rate is.","Q_Score":12,"Tags":"python,algorithm,random","A_Id":7917373,"CreationDate":"2011-10-26T20:52:00.000","Title":"Generating random numbers under very specific constraints","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am faced with the following programming problem. I need to generate n (a, b) tuples for which the sum of all a's is a given A and sum of all b's is a given B and for each tuple the ratio of a \/ b is in the range (c_min, c_max). A \/ B is within the same range, too. I am also trying to make sure there is no bias in the result other than what is introduced by the constraints and the a \/ b values are more-or-less uniformly distributed in the given range.\nSome clarifications and meta-constraints:\n\nA, B, c_min, and c_max are given. \nThe ratio A \/ B is in the (c_min, c_max) range. This has to be so if the problem is to have a solution given the other constraints.\na and b are >0 and non-integer.\n\nI am trying to implement this in Python but ideas in any language (English included) are much appreciated.","AnswerCount":6,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":2895,"Q_Id":7908800,"Users Score":2,"Answer":"Start by generating as many identical tuples, n, as you need:\n(A\/n, B\/n)\nNow pick two tuples at random. Make a random change to the a value of one, and a compensating change to the a value of the other, keeping everything within the given constraints. Put the two tuples back.\nNow pick another random pair. This times twiddle with the b values.\nLather, rinse repeat.","Q_Score":12,"Tags":"python,algorithm,random","A_Id":7908987,"CreationDate":"2011-10-26T20:52:00.000","Title":"Generating random numbers under very specific constraints","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am faced with the following programming problem. I need to generate n (a, b) tuples for which the sum of all a's is a given A and sum of all b's is a given B and for each tuple the ratio of a \/ b is in the range (c_min, c_max). A \/ B is within the same range, too. I am also trying to make sure there is no bias in the result other than what is introduced by the constraints and the a \/ b values are more-or-less uniformly distributed in the given range.\nSome clarifications and meta-constraints:\n\nA, B, c_min, and c_max are given. \nThe ratio A \/ B is in the (c_min, c_max) range. This has to be so if the problem is to have a solution given the other constraints.\na and b are >0 and non-integer.\n\nI am trying to implement this in Python but ideas in any language (English included) are much appreciated.","AnswerCount":6,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":2895,"Q_Id":7908800,"Users Score":2,"Answer":"I think the simplest thing is to\n\nUse your favorite method to throw n-1 values such that \\sum_i=0,n-1 a_i < A, and set a_n to get the right total. There are several SO question about doing that, though I've never seen a answer I'm really happy with yet. 
Maybe I'll write a paper or something.\nGet the n-1 b's by throwing the c_i uniformly on the allowed range, and set final b to get the right total and check on the final c (I think it must be OK, but I haven't proven it yet).\n\nNote that since we have 2 hard constrains we should expect to throw 2n-2 random numbers, and this method does exactly that (on the assumption that you can do step 1 with n-1 throws.","Q_Score":12,"Tags":"python,algorithm,random","A_Id":7908989,"CreationDate":"2011-10-26T20:52:00.000","Title":"Generating random numbers under very specific constraints","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an .x3d code which references a python script. I am trying to implement certain functions which make use of the numpy module. However, I am only able to import the builtin modules from Python.\nI am looking for a way to import the numpy module into the script without having to call the interpreter (i.e. \"test.py\", instead of \"python test.py\").\nCurrently I get \"ImportError: No module named numpy\".\nMy question is: Is there a way to import the numpy module without having to call from the interpreter? Is there a way to include numpy as one of the built-in modules of Python?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1209,"Q_Id":7909761,"Users Score":4,"Answer":"find where numpy is installed on your system. For me, it's here:\n\/usr\/lib\/pymodules\/python2.7\nimport it explicitly before importing numpy\n\nimport sys\nsys.path.append('\/usr\/lib\/pymodules\/python2.7')\n... if you need help finding the correct path, check the contents of sys.path while using your python interpreter\nimport sys\nprint sys.path","Q_Score":3,"Tags":"python,numpy","A_Id":7909874,"CreationDate":"2011-10-26T22:36:00.000","Title":"Python - Run numpy without the python interpreter","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an .x3d code which references a python script. I am trying to implement certain functions which make use of the numpy module. However, I am only able to import the builtin modules from Python.\nI am looking for a way to import the numpy module into the script without having to call the interpreter (i.e. \"test.py\", instead of \"python test.py\").\nCurrently I get \"ImportError: No module named numpy\".\nMy question is: Is there a way to import the numpy module without having to call from the interpreter? Is there a way to include numpy as one of the built-in modules of Python?","AnswerCount":3,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":1209,"Q_Id":7909761,"Users Score":3,"Answer":"I'm going to guess that your #! line is pointing to a different python interpreter then the one you use normally. 
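A quick way to check that is to put a couple of lines like these at the top of test.py and compare the output with what your normal interpreter reports:

import sys
print(sys.executable)    # which interpreter is actually running the script
print(sys.path)          # where it will look for numpy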
Make sure they point to the same one.","Q_Score":3,"Tags":"python,numpy","A_Id":7909895,"CreationDate":"2011-10-26T22:36:00.000","Title":"Python - Run numpy without the python interpreter","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an .x3d code which references a python script. I am trying to implement certain functions which make use of the numpy module. However, I am only able to import the builtin modules from Python.\nI am looking for a way to import the numpy module into the script without having to call the interpreter (i.e. \"test.py\", instead of \"python test.py\").\nCurrently I get \"ImportError: No module named numpy\".\nMy question is: Is there a way to import the numpy module without having to call from the interpreter? Is there a way to include numpy as one of the built-in modules of Python?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":1209,"Q_Id":7909761,"Users Score":1,"Answer":"Add the num.py libraries to sys.path before you call import","Q_Score":3,"Tags":"python,numpy","A_Id":7909774,"CreationDate":"2011-10-26T22:36:00.000","Title":"Python - Run numpy without the python interpreter","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I couldn't find the right function to add a footnote in my plot.\nThe footnote I want to have is something like an explanation of one item in the legend, but it is too long to put in the legend box. So, I'd like to add a ref number, e.g. [1], to the legend item, and add the footnote in the bottom of the plot, under the x-axis.\nWhich function should I use? Thanks!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":42816,"Q_Id":7917107,"Users Score":17,"Answer":"One way would be just use plt.text(x,y,'text')","Q_Score":26,"Tags":"python,matplotlib","A_Id":7918549,"CreationDate":"2011-10-27T14:07:00.000","Title":"Add footnote under the x-axis using matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"For SciPy sparse matrix, one can use todense() or toarray() to transform to NumPy matrix or array. What are the functions to do the inverse?\nI searched, but got no idea what keywords should be the right hit.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":141998,"Q_Id":7922487,"Users Score":0,"Answer":"As for the inverse, the function is inv(A), but I won't recommend using it, since for huge matrices it is very computationally costly and unstable. 
Instead, you should use an approximation to the inverse, or if you want to solve Ax = b you don't really need A-1.","Q_Score":102,"Tags":"python,numpy,scipy,sparse-matrix","A_Id":33510117,"CreationDate":"2011-10-27T21:14:00.000","Title":"How to transform numpy.matrix or array to scipy sparse matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently downloaded OpenCV 2.3.1, compiled with the CMake flags withQt and withQtOpenGL turned on. My Qt version is 4.7.4 and is configured with OpenGL enabled. Supposedly I only need to copy cv2.pyd to Python's site-package path:\n\nC:\\Python27\\Lib\\site-packages\n\nAnd in the mean time make sure the OpenCV dlls are somewhere in my PATH. However, when I try to call\n\nimport cv2\n\nin ipython, it returned an error:\n\nImportError: DLL load failed: The specified procedure could not be found.\n\nI also tried OpenCV 2.3, resulting the same error. If OpenCV is compiled without Qt, the import works just fine. Has anyone run into similar problem before? Or is there anyway to get more information, such as which procedure is missing from what DLL?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":488,"Q_Id":7940848,"Users Score":0,"Answer":"Probably need the qt dll's in the same place as the opencv dlls - and they have to be the version built with the same compiler as opencv (and possibly python)","Q_Score":2,"Tags":"python,qt,opencv,import","A_Id":7940923,"CreationDate":"2011-10-29T18:33:00.000","Title":"Error when importing OpenCV python module (when built with Qt and QtOpenGL)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm in the process of working on programming project that involves some pretty extensive Monte Carlo simulation in Python, and as such the generation of a tremendous number of random numbers. Very nearly all of them, if not all of them, will be able to be generated by Python's built in random module.\nI'm something of a coding newbie, and unfamiliar with efficient and inefficient ways to do things. 
Is it faster to generate say, all the random numbers as a list, and then iterate through that list, or generate a new random number each time a function is called, which will be in a very large loop?\nOr some other, undoubtedly more clever method?","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":12900,"Q_Id":7988494,"Users Score":2,"Answer":"Code to generate 10M random numbers efficiently and faster:\n\nimport random\nl=10000000\nlistrandom=[]\nfor i in range (l):\n value=random.randint(0,l)\n listrandom.append(value)\nprint listrandom\n\nTime taken included the I\/O time lagged in printing on screen:\n\nreal 0m27.116s\nuser 0m24.391s\nsys 0m0.819s","Q_Score":14,"Tags":"python,random","A_Id":36474703,"CreationDate":"2011-11-02T23:14:00.000","Title":"Efficient way to generate and use millions of random numbers in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a difference between HDF5 files and files created by PyTables? \nPyTables has two functions .isHDFfile() and .isPyTablesFile() suggesting that there\nis a difference between the two formats.\nI've done some looking around on Google and have gathered that PyTables is built on top of HDF, but I wasn't able to find much beyond that. \nI am specifically interested in interoperability, speed and overhead.\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2253,"Q_Id":8002569,"Users Score":17,"Answer":"PyTables files are HDF5 files.\nHowever, as I understand it, PyTables adds some extra metadata to the attributes of each entry in the HDF file.\nIf you're looking for a more \"vanilla\" hdf5 solution for python\/numpy, have a look a h5py.\nIt's less database-like (i.e. less \"table-like\") than PyTables, and doesn't have as many nifty querying features, but it's much more straight-forward, in my opinion. If you're going to be accessing an hdf5 file from multiple different languages, h5py is probably a better route to take.","Q_Score":14,"Tags":"python,numpy,hdf5,pytables","A_Id":8002777,"CreationDate":"2011-11-03T22:13:00.000","Title":"Difference between HDF5 file and PyTables file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for an algorithm that is implemented in C, C++, Python or Java that calculates the set of winning coalitions for n agents where each agent has a different amount of votes. I would appreciate any hints. Thanks!","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":373,"Q_Id":8019172,"Users Score":3,"Answer":"In other words, you have an array X[1..n], and want to have all the subsets of it for which sum(subset) >= 1\/2 * sum(X), right?\nThat probably means the whole set qualifies. \nAfter that, you can drop any element k having X[k] < 1\/2 * sum(X), and every such a coalition will be fine as an answer, too.\nAfter that, you can proceed dropping elements one by one, stopping when you've reached half of the sum. 
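A rough recursive sketch of that dropping idea (treating 'winning' as keeping at least half of the total votes, and remembering coalitions already visited; the function name is only illustrative):

def winning_coalitions(votes):
    # votes: one vote count per agent
    total = float(sum(votes))
    seen = set()
    result = []
    def drop(coalition):
        if coalition in seen:
            return
        seen.add(coalition)
        result.append(coalition)
        for k in coalition:
            rest = coalition - frozenset([k])
            if sum(votes[i] for i in rest) >= total / 2:
                drop(rest)
    drop(frozenset(range(len(votes))))
    return result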
\nThis is obviously not the most effective solution: you don't want to drop k1=1,k2=2 if you've already tried k1=2,k2=1\u2014but I believe you can handle this.","Q_Score":1,"Tags":"java,c++,python,c,algorithm","A_Id":8019217,"CreationDate":"2011-11-05T08:59:00.000","Title":"Coalition Search Algorithm","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for an algorithm that is implemented in C, C++, Python or Java that calculates the set of winning coalitions for n agents where each agent has a different amount of votes. I would appreciate any hints. Thanks!","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":373,"Q_Id":8019172,"Users Score":0,"Answer":"Arrange the number of votes for each of the agents into an array, and compute the partial sums from the right, so that you can find out SUM_i = k to n Votes[i] just by looking up the partial sum.\nThen do a backtrack search over all possible subsets of {1, 2, ...n}. At any point in the backtrack you have accepted some subset of agents 0..i - 1, and you know from the partial sum the maximum possible number of votes available from other agents. So you can look to see if the current subset could be extended with agents number >= i to form a winning coalition, and discard it if not.\nThis gives you a backtrack search where you consider a subset only if it is already a winning coalition, or you will extend it to become a winning coalition. So I think the cost of the backtrack search is the sum of the sizes of the winning coalitions you discover, which seems close to optimal. I would be tempted to rearrange the agents before running this so that you deal with the agents with most votes first, but at the moment I don't see an argument that says you gain much from that.\nActually - taking a tip from Alf's answer - life is a lot easier if you start from the full set of agents, and then use backtrack search to decide which agents to discard. Then you don't need an array of partial sums, and you only generate subsets you want anyway. And yes, there is no need to order agents in advance.","Q_Score":1,"Tags":"java,c++,python,c,algorithm","A_Id":8019235,"CreationDate":"2011-11-05T08:59:00.000","Title":"Coalition Search Algorithm","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a fairly complex function f(x) that I want to optimize and I am using the fmin_bfgs function from the scipy.optimize module from Scipy. It forces me to give the function to minimize and the function of the gradient f'(x) separately, which is a pity because some of the computations for the gradient can be done when evaluating the function f(x). \nIs there a way of combining both functions? I was considering saving the intermediate values required for both functions, but I don't know if the fmin_bfgs function guarantees that f(x) is evaluated before than f'(x).\nThank you","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":557,"Q_Id":8045576,"Users Score":4,"Answer":"The scipy.optimize.minimize methods has a parameter called \"jac\". 
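A minimal sketch of that usage, with a toy quadratic objective standing in for your real f(x):

import numpy as np
from scipy.optimize import minimize

def f_and_grad(x):
    # compute the value and the gradient in one pass
    value = np.sum(x ** 2)
    grad = 2.0 * x
    return value, grad

res = minimize(f_and_grad, np.ones(5), method='BFGS', jac=True)
print(res.x)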
If set to True, minimize will expect the callable f(x) to both return the function value and it's derivatives.","Q_Score":3,"Tags":"python,optimization,scipy","A_Id":15979342,"CreationDate":"2011-11-08T03:24:00.000","Title":"How to force the functions of the optimize module of scipy to take a function and its gradient simultaneously","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a function to calculate exponential moving sum in numpy or scipy. I want to avoid using python loops because they are really slow. \nto be specific, I have two series A[] and T[]. T[i] is the timestamp of value A[i]. I define a half-decay period tau. For a given time t, the exponential moving sum is the sum of all the values A[i] that happens before t, with weight exp(-(t-T[i])\/tau) for each A[i]. \nThanks a lot!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2259,"Q_Id":8052582,"Users Score":0,"Answer":"You can try to improve python loops by doing good \"practices\" (like avoiding dots).\nMaybe you can code you function in C (into a \"numpy library\") and call it from python.","Q_Score":5,"Tags":"python,numpy,scipy,vectorization","A_Id":8052660,"CreationDate":"2011-11-08T15:10:00.000","Title":"exponential moving sum in numpy \/ scipy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use OpenCV and Python to stitch together several hundred puzzle piece images into one large, complete image. All of the images are digitized and are in a PNG format. The pieces were originally from a scan and extracted into individual pieces, so they have transparent backgrounds and are each a single piece. What is the process of comparing them and finding their matches using OpenCV?\nThe plan is that the images and puzzle pieces will always be different and this python program will take a scan of all the pieces laid out, crop out the pieces (which it does now), and build the puzzle back.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1769,"Q_Id":8083263,"Users Score":0,"Answer":"If this is a small fun project that you are trying to do, you can compare image histograms or use SIFT\/SURF. I don't think there is implementation of SIFT, SURF in Python API. If you can find compatible equivalent, you can do it.\nComparing images are very much dependent on the data-set that you have. Some techniques work more better than the other.","Q_Score":2,"Tags":"python,image-processing,opencv,image-stitching","A_Id":11141963,"CreationDate":"2011-11-10T16:54:00.000","Title":"Using OpenCV and Python to stitch puzzle images together","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"is there a good algorithm for checking whether there are 5 same elements in a row or a column or diagonally given a square matrix, say 6x6?\nthere is ofcourse the naive algorithm of iterating through every spot and then for each point in the matrix, iterate through that row, col and then the diagonal. 
I am wondering if there is a better way of doing it.","AnswerCount":7,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3808,"Q_Id":8091248,"Users Score":0,"Answer":"I don't think you can avoid iteration, but you can at least do an XOR of all elements and if the result of that is 0 => they are all equal, then you don't need to do any comparisons.","Q_Score":6,"Tags":"python,algorithm,matrix","A_Id":8091830,"CreationDate":"2011-11-11T08:08:00.000","Title":"better algorithm for checking 5 in a row\/col in a matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"is there a good algorithm for checking whether there are 5 same elements in a row or a column or diagonally given a square matrix, say 6x6?\nthere is ofcourse the naive algorithm of iterating through every spot and then for each point in the matrix, iterate through that row, col and then the diagonal. I am wondering if there is a better way of doing it.","AnswerCount":7,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3808,"Q_Id":8091248,"Users Score":0,"Answer":"You can try improve your method with some heuristics: use the knowledge of the matrix size to exclude element sequences that do not fit and suspend unnecessary calculation. In case the given vector size is 6, you want to find 5 equal elements, and the first 3 elements are different, further calculation do not have any sense.\nThis approach can give you a significant advantage, if 5 equal elements in a row happen rarely enough.","Q_Score":6,"Tags":"python,algorithm,matrix","A_Id":8098697,"CreationDate":"2011-11-11T08:08:00.000","Title":"better algorithm for checking 5 in a row\/col in a matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"is there a good algorithm for checking whether there are 5 same elements in a row or a column or diagonally given a square matrix, say 6x6?\nthere is ofcourse the naive algorithm of iterating through every spot and then for each point in the matrix, iterate through that row, col and then the diagonal. I am wondering if there is a better way of doing it.","AnswerCount":7,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3808,"Q_Id":8091248,"Users Score":0,"Answer":"Your best approach may depend on whether you control the placement of elements.\nFor example, if you were building a game and just placed the most recent element on the grid, you could capture into four strings the vertical, horizontal, and diagonal strips that intersected that point, and use the same algorithm on each strip, tallying each element and evaluating the totals. 
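A rough sketch of that last-move check at position (r, c), assuming the board is a list of lists with None in empty cells (function name is just for illustration):

def five_in_a_row(board, r, c):
    n = len(board)
    value = board[r][c]
    row = board[r]
    col = [board[i][c] for i in range(n)]
    diag = [board[r + k][c + k] for k in range(-min(r, c), min(n - r, n - c))]
    anti = [board[r + k][c - k] for k in range(max(-r, c - n + 1), min(n - r, c + 1))]
    def has_run(strip):
        run = 0
        for v in strip:
            run = run + 1 if v == value else 0
            if run >= 5:
                return True
        return False
    return any(has_run(s) for s in (row, col, diag, anti))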
The algorithm may be slightly different depending on whether you're counting five contiguous elements out of the six, or allow gaps as long as the total is five.","Q_Score":6,"Tags":"python,algorithm,matrix","A_Id":8091403,"CreationDate":"2011-11-11T08:08:00.000","Title":"better algorithm for checking 5 in a row\/col in a matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been working with HDF5 files with C and Matlab, both using the same way for reading from and writing to datasets:\n\nopen file with h5f\nopen dataset with h5d\nselect space with h5s\n\nand so on...\nBut now I'm working with Python, and with its h5py library I see that it has two ways to manage HDF5: high-level and low-level interfaces. And with the former it takes less lines of code to get the information from a single variable of the file.\nIs there any noticeable loss of performance when using the high-level interface?\nFor example when dealing with a file with many variables inside, and we must read just one of them.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":932,"Q_Id":8096668,"Users Score":2,"Answer":"High-level interfaces are generally going with a performance loss of some sort. After that, whether it is noticeable (worth being investigated) will depend on what you are doing exactly with your code.\nJust start with the high-level interface. If the code is overall too slow, start profiling and move the bottlenecks down to the lower-level interface and see if it helps.","Q_Score":7,"Tags":"python,performance,hdf5,h5py","A_Id":8288377,"CreationDate":"2011-11-11T16:02:00.000","Title":"HDF5 for Python: high level vs low level interfaces. h5py","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would have a quite simple question, but can't find any suitable automated solution for now. \nI have developed an algorithm that performs a lot of stuff (image processing in fact) in Python. \nWhat I want to do now is to optimize it. And for that, I would love to create a graph of my algorithm.\nKind of an UML chart or sequencial chart in fact, in which functions would be displayed with inputs and ouptuts. \nMy algorithm does not imply complex stuff, and is mainly based on a = f(b) operations (no databases, hardware stuff, server, . . . )\nWould you have any hint? 
\nThanks by advance !","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":435,"Q_Id":8119900,"Users Score":1,"Answer":"UML generation is provided by pyreverse - it's part of pylint package\nIt generates UML in dot format - or png, etc.\nIt creates UML diagram, so you can easily see basic structure of your code\nI'm not sure if it satisfy all your needs, but it might be helpful","Q_Score":5,"Tags":"python,coding-style","A_Id":8121141,"CreationDate":"2011-11-14T10:01:00.000","Title":"Get the complete structure of a program?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm fed up with manually creating graphs in excel and consequently, I'm trying to automate the process using Python to massage the .csv data into a workable form and matplotlib to plot the result.\nUsing matplotlib and generating them is no problem but I can't work out is how to set the aspect ration \/ resolution of the output.\nSpecifically, I'm trying to generate scatter plots and stacked area graphs. Everything I've tried seems to result in one or more of the following:\n\nCramped graph areas (small plot area covered with the legend, axes etc.).\nThe wrong aspect ratio.\nLarge spaces on the sides of the chart area (I want a very wide \/ not very tall image).\n\nIf anyone has some working examples showing how to achieve this result I'd be very grateful!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1578,"Q_Id":8143439,"Users Score":0,"Answer":"For resolution, you can use the dpi (dots per inch) argument when creating a figure, or in the savefig() function. For high quality prints of graphics dpi=600 or more is recommended.","Q_Score":4,"Tags":"python,matplotlib","A_Id":8147354,"CreationDate":"2011-11-15T21:34:00.000","Title":"How do get matplotlib pyplot to generate a chart for viewing \/ output to .png at a specific resolution?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to simulate\/model a closed-loop, linear, time-invariant system (specifically a locked PLL approximation) with python. \nEach sub-block within the model has a known transfer function which is given in terms of complex frequency H(s) = K \/ ( s * tau + 1 ). Using the model, I would like to see how the system response as well as the noise response is affected as parameters (e.g. the VCO gain) are changed. This would involve using Bode plots and root-locus plots. \nWhat Python modules should I seek out to get the job done?","AnswerCount":5,"Available Count":1,"Score":0.1586485043,"is_accepted":false,"ViewCount":14035,"Q_Id":8144910,"Users Score":4,"Answer":"As @Matt said, I know this is old. But this came up as my first google hit, so I wanted to edit it.\nYou can use scipy.signal.lti to model linear, time invariant systems. That gives you lti.bode.\nFor an impulse response in the form of H(s) = (As^2 + Bs + C)\/(Ds^2 + Es + F), you would enter h = scipy.signal.lti([A,B,C],[D,E,F]). 
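As a hedged illustration of that idea in reasonably recent SciPy (the gain K and time constant tau below are made-up values), a first-order block H(s) = K / (tau*s + 1) could be modelled and inspected like this:

    import scipy.signal
    import matplotlib.pyplot as plt

    K, tau = 10.0, 1e-3                       # made-up gain and time constant
    h = scipy.signal.lti([K], [tau, 1.0])     # H(s) = K / (tau*s + 1)
    w, mag, phase = h.bode()                  # frequency (rad/s), magnitude (dB), phase (deg)
    plt.semilogx(w, mag)                      # magnitude part of the Bode plot
    plt.show()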
To get the bode plot, you would do plot(*h.bode()[:2]).","Q_Score":6,"Tags":"simulation,python,modeling","A_Id":16157331,"CreationDate":"2011-11-15T22:17:00.000","Title":"Modeling a linear system with Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a way to track two different colors at the same time using a single camera with OpenCV 2.3 (python bindings). \nI've read through a number of papers regarding OpenCV but can't find any mention as to whether or not it's capable of analyzing multiple histograms at once. \nIs this is even technically possible or do I need a separate camera for each color?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4911,"Q_Id":8152504,"Users Score":0,"Answer":"I don't really understand your concern. \nWith the camera, you would get an image object. \nWith this image object, you can calculate as much different histograms as you want. \nEach histogram would be a different output object :). \nBasicaly, you could track hundreds of colors at the same time!","Q_Score":0,"Tags":"python,opencv","A_Id":9634189,"CreationDate":"2011-11-16T13:34:00.000","Title":"Tracking two different colors using OpenCV 2.3 and Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm messing around with machine learning, and I've written a K Means algorithm implementation in Python. It takes a two dimensional data and organises them into clusters. Each data point also has a class value of either a 0 or a 1.\nWhat confuses me about the algorithm is how I can then use it to predict some values for another set of two dimensional data that doesn't have a 0 or a 1, but instead is unknown. For each cluster, should I average the points within it to either a 0 or a 1, and if an unknown point is closest to that cluster, then that unknown point takes on the averaged value? Or is there a smarter method?\nCheers!","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":14656,"Q_Id":8193563,"Users Score":1,"Answer":"If you are considering assigning a value based on the average value within the nearest cluster, you are talking about some form of \"soft decoder\", which estimates not only the correct value of the coordinate but your level of confidence in the estimate. The alternative would be a \"hard decoder\" where only values of 0 and 1 are legal (occur in the training data set), and the new coordinate would get the median of the values within the nearest cluster. 
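A minimal sketch of that hard-decoder idea, assuming the centroids and a per-cluster class (0 or 1) have already been computed elsewhere (the values below are made up):

    import numpy as np

    centroids = np.array([[0.1, 0.2], [0.8, 0.9]])   # made-up cluster centres
    cluster_class = np.array([0, 1])                 # class previously assigned to each cluster

    def predict(point):
        # hard decoder: take the class of the nearest centroid
        distances = np.linalg.norm(centroids - point, axis=1)
        return cluster_class[np.argmin(distances)]

    print(predict(np.array([0.75, 0.95])))           # -> 1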
My guess is that you should always assign only a known-valid class value (0 or 1) to each coordinate, and averaging class values is not a valid approach.","Q_Score":9,"Tags":"python,machine-learning,data-mining,k-means,prediction","A_Id":8194069,"CreationDate":"2011-11-19T10:58:00.000","Title":"Predicting Values with k-Means Clustering Algorithm","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using matplotlib with interactive mode on and am performing a computation, say an optimization with many steps where I plot the intermediate results at each step for debugging purposes. These plots often fill the screen and overlap to a large extent.\nMy problem is that during the calculation, figures that are partially or fully occluded don't refresh when I click on them. They are just a blank grey. \nI would like to force a redraw if necessary when I click on a figure, otherwise it is not useful to display it. Currently, I insert pdb.set_trace()'s in the code so I can stop and click on all the figures to see what is going on\nIs there a way to force matplotlib to redraw a figure whenever it gains mouse focus or is resized, even while it is busy doing something else?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2049,"Q_Id":8225460,"Users Score":0,"Answer":"Have you tried to call plt.figure(fig.number) before plotting on figure fig and plt.show() after plotting a figure? It should update all the figures.","Q_Score":6,"Tags":"python,matplotlib","A_Id":8232587,"CreationDate":"2011-11-22T10:41:00.000","Title":"Getting matplotlib plots to refresh on mouse focus","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given: lst = [['John',3],['Blake',4],['Ted',3]]\nResult: lst = [['John',3],['Ted',3],['Blake',4]]\nI'm looking for a way to sort lists in lists first numerically then alphabetically without the use of the \"itemgetter\" syntax.","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":1274,"Q_Id":8236823,"Users Score":1,"Answer":"What's wrong with itemgetter?\nlst.sort(key=lambda l: list(reversed(l)) should do the trick","Q_Score":0,"Tags":"python,list","A_Id":8236857,"CreationDate":"2011-11-23T03:02:00.000","Title":"Python Sorting Lists in Lists","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a good (small and light) alternative to numpy for python, to do linear algebra?\nI only need matrices (multiplication, addition), inverses, transposes and such.\nWhy?\n\nI am tired of trying to install numpy\/scipy - it is such a pita to get\n it to work - it never seems to install correctly (esp. since I have\n two machines, one linux and one windows): no matter what I do: compile\n it or install from pre-built binaries. 
How hard is it to make a\n \"normal\" installer that just works?","AnswerCount":7,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":10052,"Q_Id":8289157,"Users Score":0,"Answer":"I sometimes have this problem..not sure if this works but I often install it using my own account then try to run it in an IDE(komodo in my case) and it doesn't work. Like your issue it says it cannot find it. The way I solve this is to use sudo -i to get into root and then install it from there.\nIf that does not work can you update your answer to provide a bit more info about the type of system your using(linux, mac, windows), version of python\/numpy and how your accessing it so it'll be easier to help.","Q_Score":11,"Tags":"python","A_Id":8289322,"CreationDate":"2011-11-27T21:21:00.000","Title":"Alternative to scipy and numpy for linear algebra?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Anybody able to supply links, advice, or other forms of help to the following? \nObjective - use python to classify 10-second audio samples so that I afterwards can speak into a microphone and have python pick out and play snippets (faded together) of closest matches from db.\nMy objective is not to have the closest match and I don't care what the source of the audio samples is. So the result is probably of no use other than speaking in noise (fun).\nI would like the python app to be able to find a specific match of FFT for example within the 10 second samples in the db. I guess the real-time sampling of the microphone will have a 100 millisecond buffersample. \nAny ideas? FFT? What db? Other?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1085,"Q_Id":8312672,"Users Score":0,"Answer":"Try searching for algorithms on \"music fingerprinting\".","Q_Score":1,"Tags":"python,audio,classification,fft,pyaudioanalysis","A_Id":8314002,"CreationDate":"2011-11-29T14:43:00.000","Title":"python - audio classification of equal length samples \/ 'vocoder' thingy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm analyzing the AST generated by python code for \"fun and profit\", and I would like to have something more graphical than \"ast.dump\" to actually see the AST generated.\nIn theory is already a tree, so it shouldn't be too hard to create a graph, but I don't understand how I could do it.\nast.walk seems to walk with a BFS strategy, and the visitX methods I can't really see the parent or I don't seem to find a way to create a graph...\nIt seems like the only way is to write my own DFS walk function, is does it make sense?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3299,"Q_Id":8340567,"Users Score":6,"Answer":"If you look at ast.NodeVisitor, it's a fairly trivial class. You can either subclass it or just reimplement its walking strategy to whatever you need. For instance, keeping references to the parent when nodes are visited is very simple to implement this way, just add a visit method that also accepts the parent as an argument, and pass that from your own generic_visit.\nP.S. 
By the way, it appears that NodeVisitor.generic_visit implements DFS, so all you have to do is add the parent node passing.","Q_Score":11,"Tags":"python,grammar,abstract-syntax-tree","A_Id":8341353,"CreationDate":"2011-12-01T11:23:00.000","Title":"Python ast to dot graph","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just changed a program I am writing to hold my data as numpy arrays as I was having performance issues, and the difference was incredible. It originally took 30 minutes to run and now takes 2.5 seconds!\nI was wondering how it does it. I assume it is that the because it removes the need for for loops but beyond that I am stumped.","AnswerCount":6,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":47312,"Q_Id":8385602,"Users Score":0,"Answer":"Numpy arrays are extremily similar to 'normal' arrays such as those in c. Notice that every element has to be of the same type. The speedup is great because you can take advantage of prefetching and you can instantly access any element in array by it's index.","Q_Score":84,"Tags":"python,arrays,numpy","A_Id":8385718,"CreationDate":"2011-12-05T12:53:00.000","Title":"Why are NumPy arrays so fast?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using QItemSelectionModel with QTableView to allow the users to select rows. The problem is that when the user then clicks on a column header to sort the rows, the selection disappears and all the sorted data is displayed. How can I keep the selection, and just sort that, rather than having all the rows appear?\nThanks!\n--Erin","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":623,"Q_Id":8388659,"Users Score":0,"Answer":"Here is the way I ended up solving this problem:\n\nWhen row selections are made, put the unique IDs of each hidden row into a list, then hide all hidden rows\nUse self.connect(self.myHorizontalHeader, SIGNAL(\"sectionClicked(int)\"), self.keepSelectionValues) to catch the\nevent when a user clicks on a column header to sort the rows\nIn self.keepSelectionValue, go through each row and if the unique ID is in the hidden row list, hide the row\n\nThis effectively sorts and displays the non-hidden rows without displaying all the rows of the entire table.","Q_Score":0,"Tags":"python,pyqt","A_Id":8539266,"CreationDate":"2011-12-05T16:41:00.000","Title":"How can I keep row selections in QItemSelectionModel when columns are sorted?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wanted the imshow() function in matplotlib.pyplot to display images the opposite way, i.e upside down. 
Is there a simple way to do this?","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":82762,"Q_Id":8396101,"Users Score":221,"Answer":"Specify the keyword argument origin='lower' or origin='upper' in your call to imshow.","Q_Score":115,"Tags":"python,image,matplotlib","A_Id":8396124,"CreationDate":"2011-12-06T06:20:00.000","Title":"Invert image displayed by imshow in matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wanted the imshow() function in matplotlib.pyplot to display images the opposite way, i.e upside down. Is there a simple way to do this?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":82762,"Q_Id":8396101,"Users Score":1,"Answer":"You can use the extent argument. For example, if X values range from -10 and 10 and Y values range from -5 to 5, you should pass extent=(-10,10,-5,5) to imshow().","Q_Score":115,"Tags":"python,image,matplotlib","A_Id":67577958,"CreationDate":"2011-12-06T06:20:00.000","Title":"Invert image displayed by imshow in matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wanted the imshow() function in matplotlib.pyplot to display images the opposite way, i.e upside down. Is there a simple way to do this?","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":82762,"Q_Id":8396101,"Users Score":0,"Answer":"Use ax.invert_yaxis() to invert the y-axis, or ax.invert_xaxis() to invert the x-axis.","Q_Score":115,"Tags":"python,image,matplotlib","A_Id":68682366,"CreationDate":"2011-12-06T06:20:00.000","Title":"Invert image displayed by imshow in matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python to work on my project in image processing. Suppose I got a very large image ( 100000 x 100000), and I need to randomly select a 200 x 200 square from this large image. Are there any easy way to do this job? Please share some light with me. Thank you\n----------------------------- EDIT ------------------------------------\nSorry I don't think it is 100000 x 100000, but the resolution of images are in 1 km and 2km. 
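As an illustrative aside, if the image fits in memory as a NumPy array, the selection itself is plain slicing with random offsets (the array contents and sizes below are made up):

    import numpy as np

    img = np.random.rand(1000, 1000)   # stand-in for the real image array
    size = 200
    y = np.random.randint(0, img.shape[0] - size + 1)
    x = np.random.randint(0, img.shape[1] - size + 1)
    patch = img[y:y + size, x:x + size]   # the random 200 x 200 square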
I am having trouble with selecting the area of 200 x 200.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":465,"Q_Id":8410260,"Users Score":4,"Answer":"If you convert to binary PPM format, then there should be an easy way to seek to the appropriate offsets - it's not compressed, so there should be simple relationships.\nSo pick two random numbers between 0 and 100000-200-1, and go to town.\n(I'm assuming you don't have a system with 10's of gigabytes of RAM)","Q_Score":2,"Tags":"python,image-processing,matplotlib,python-imaging-library","A_Id":8410307,"CreationDate":"2011-12-07T04:06:00.000","Title":"randomly select a 200x200 square inside an image in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using csv.DictReader to read some large files into memory to then do some analysis, so all objects from multiple CSV files need to be kept in memory. I need to read them as Dictionary to make analysis easier, and because the CSV files may be altered by adding new columns.\nYes SQL can be used, but I'd rather avoid it if it's not needed.\nI'm wondering if there is a better and easier way of doing this. My concern is that I will have many dictionary objects with same keys and waste memory? The use of __slots__ was an option, but I will only know the attributes of an object after reading the CSV. \n[Edit:] Due to being on legacy system and \"restrictions\", use of third party libraries is not possible.","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":490,"Q_Id":8411476,"Users Score":0,"Answer":"Possibilities:\n(1) Benchmark the csv.DictReader approach and see if it causes a problem. Note that the dicts contain POINTERS to the keys and values; the actual key strings are not copied into each dict.\n(2) For each file, use csv.Reader, after the first row, build a class dynamically, instantiate it once per remaining row. Perhaps this is what you had in mind. \n(3) Have one fixed class, instantiated once per file, which gives you a list of tuples for the actual data, a tuple that maps column indices to column names, and a dict that maps column names to column indices. Tuples occupy less memory than lists because there is no extra append-space allocated. You can then get and set your data via (row_index, column_index) and (row_index, column_name).\nIn any case, to get better advice, how about some simple facts and stats: What version of Python? How many files? rows per file? columns per file? total unique keys\/column names?","Q_Score":1,"Tags":"python,memory,dictionary","A_Id":8414694,"CreationDate":"2011-12-07T06:51:00.000","Title":"Python: large number of dict like objects memory use","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using csv.DictReader to read some large files into memory to then do some analysis, so all objects from multiple CSV files need to be kept in memory. I need to read them as Dictionary to make analysis easier, and because the CSV files may be altered by adding new columns.\nYes SQL can be used, but I'd rather avoid it if it's not needed.\nI'm wondering if there is a better and easier way of doing this. 
My concern is that I will have many dictionary objects with same keys and waste memory? The use of __slots__ was an option, but I will only know the attributes of an object after reading the CSV. \n[Edit:] Due to being on legacy system and \"restrictions\", use of third party libraries is not possible.","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":490,"Q_Id":8411476,"Users Score":0,"Answer":"If all the data in one column are the same type, you can use NumPy. NumPy's loadtxt and genfromtxt function can be used to read csv file. And because it returns an array, the memory usage is smaller then dict.","Q_Score":1,"Tags":"python,memory,dictionary","A_Id":8411784,"CreationDate":"2011-12-07T06:51:00.000","Title":"Python: large number of dict like objects memory use","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large matrix (approx. 80,000 X 60,000), and I basically want to scramble all the entries (that is, randomly permute both rows and columns independently). \nI believe it'll work if I loop over the columns, and use randperm to randomly permute each column. (Or, I could equally well do rows.) Since this involves a loop with 60K iterations, I'm wondering if anyone can suggest a more efficient option? \nI've also been working with numpy\/scipy, so if you know of a good option in python, that would be great as well. \nThanks!\nSusan\nThanks for all the thoughtful answers! Some more info: the rows of the matrix represent documents, and the data in each row is a vector of tf-idf weights for that document. Each column corresponds to one term in the vocabulary. I'm using pdist to calculate cosine similarities between all pairs of papers. And I want to generate a random set of papers to compare to. \nI think that just permuting the columns will work, then, because each paper gets assigned a random set of term frequencies. (Permuting the rows just means reordering the papers.) As Jonathan pointed out, this has the advantage of not making a new copy of the whole matrix, and it sounds like the other options all will.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2124,"Q_Id":8449501,"Users Score":0,"Answer":"Both solutions above are great, and will work, but I believe both will involve making a completely new copy of the entire matrix in memory while doing the work. Since this is a huge matrix, that's pretty painful. In the case of the MATLAB solution, I think you'll be possibly creating two extra temporary copies, depending on how reshape works internally. I think you were on the right track by operating on columns, but the problem is that it will only scramble along columns. However, I believe if you do randperm along rows after that, you'll end up with a fully permuted matrix. This way you'll only be creating temporary variables that are, at worst, 80,000 by 1. Yes, that's two loops with 60,000 and 80,000 iterations each, but internally that's going to have to happen regardless. The algorithm is going to have to visit each memory location at least twice. 
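For reference, a small NumPy sketch of permuting rows and columns independently without building a full extra copy, relying on np.random.shuffle working in place along the first axis and on A.T being a view (the sizes are illustrative):

    import numpy as np

    A = np.random.rand(8, 6)    # illustrative stand-in for the large matrix
    np.random.shuffle(A)        # permutes the rows in place
    np.random.shuffle(A.T)      # A.T is a view, so this permutes the columns in place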
You could probably do a more efficient algorithm by writing a C MEX function that operates completely in place, but I assume you'd rather not do that.","Q_Score":0,"Tags":"python,matlab,random,permutation","A_Id":8450596,"CreationDate":"2011-12-09T17:39:00.000","Title":"matlab: randomly permuting rows and columns of a 2-D array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given any n x n matrix of real coefficients A, we can define a bilinear form bA : Rn x Rn \u2192 R by\n\nbA(x, y) = xTAy ,\n\nand a quadratic form qA : Rn \u2192 R by\n\nqA(x) = bA(x, x) = xTAx .\n\n(For most common applications of quadratic forms qA, the matrix A is symmetric, or even symmetric positive definite, so feel free to assume that either one of these is the case, if it matters for your answer.)\n(Also, FWIW, bI and qI (where I is the n x n identity matrix) are, respectively, the standard inner product, and squared L2-norm on Rn, i.e. xTy and xTx.)\nNow suppose I have two n x m matrices, X and Y, and an n x n matrix A. I would like to optimize the computation of both bA(x,i, y,i) and qA(x,i) (where x,i and y,i denote the i-th column of X and Y, respectively), and I surmise that, at least in some environments like numpy, R, or Matlab, this will involve some form of vectorization.\nThe only solution I can think of requires generating diagonal block matrices [X], [Y] and [A], with dimensions mn x m, mn x m, and mn x mn, respectively, and with (block) diagonal elements x,i, y,i, and A, respectively. Then the desired computations would be the matrix multiplications [X]T[A][Y] and [X]T[A][X]. This strategy is most definitely uninspired, but if there is a way to do it that is efficient in terms of both time and space, I'd like to see it. (It goes without saying that any implementation of it that does not exploit the sparsity of these block matrices would be doomed.)\nIs there a better approach?\nMy preference of system for doing this is numpy, but answers in terms of some other system that supports efficient matrix computations, such as R or Matlab, may be OK too (assuming that I can figure out how to port them to numpy).\nThanks!\n\nOf course, computing the products XTAY and XTAX would compute the desired bA(x,i, y,i) and qA(x,i) (as the diagonal elements of the resulting m x m matrices), along with the O(m2) irrelevant bA(x,i, y,j) and bA(x,i, x,j), (for i \u2260 j), so this is a non-starter.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":3364,"Q_Id":8457110,"Users Score":1,"Answer":"It's not entirely clear what you're trying to achieve, but in R, you use crossprod to form cross-products: given matrices X and Y with compatible dimensions, crossprod(X, Y) returns XTY. Similarly, matrix multiplication is achieved with the %*% operator: X %*% Y returns the product XY. 
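Since the question names NumPy as the preferred system, here is a hedged NumPy counterpart (not taken from the answer above; sizes and data are made up) that evaluates the columnwise forms directly without forming the full m x m products:

    import numpy as np

    n, m = 5, 3                                  # illustrative sizes
    A = np.random.rand(n, n)
    X = np.random.rand(n, m)
    Y = np.random.rand(n, m)

    b = np.einsum('pi,pq,qi->i', X, A, Y)        # b[i] = x_i^T A y_i
    q = np.einsum('pi,pq,qi->i', X, A, X)        # q[i] = x_i^T A x_i
    # an equivalent formulation: np.sum(X * np.dot(A, Y), axis=0)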
So you can get XTAY as crossprod(X, A %*% Y) without having to worry about the mechanics of matrix multiplication, loops, or whatever.\nIf your matrices have a particular structure that allows optimising the computations (symmetric, triangular, sparse, banded, ...), you could look at the Matrix package, which has some support for this.\nI haven't used Matlab, but I'm sure it would have similar functions for these operations.","Q_Score":6,"Tags":"python,r,matlab,matrix,numpy","A_Id":8458779,"CreationDate":"2011-12-10T14:15:00.000","Title":"How to vectorize the evaluation of bilinear & quadratic forms?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Say I have a list in the form [[x,y,z], [x,y,z] etc...] etc where each grouping represents a random point.\nI want to order my points by the z coordinate, then within each grouping of z's, sort them by x coordinate. Is this possible?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3538,"Q_Id":8522800,"Users Score":0,"Answer":"If you go further down the page Mr. White linked to, you'll see how you can specify an arbitrary function to compute your sort key (using the handy cmp_to_key function provided).","Q_Score":3,"Tags":"python,list,sorting","A_Id":8523192,"CreationDate":"2011-12-15T15:51:00.000","Title":"Sorting a Python list by third element, then by first element, etc?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pybrain NN up and running, and it seems to be working rather well. Ideally, I would like to train the network and obtain a prediction after each data point (the previous weeks figures, in this case) has been added to the dataset.\nAt the moment I'm doing this by rebuilding the network each time, but it takes an increasingly long time to train the network as each example is added (+2 minutes for each example, in a dataset of 1000s of examples).\nIs there a way to speed up the process by adding the new example to an already trained NN and updating it, or am I overcomplicating the matter, and would be better served by training on a single set of examples (say last years data) and then testing on all of the new examples (this year)?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":985,"Q_Id":8582498,"Users Score":1,"Answer":"It dependes of what is your objective. If you need an updated NN-model you can perform an online training, i.e. performing a single step of back-propagation with the sample acquired at time $t$ starting from the network you had at time $t-1$. Or maybe you can discard the older samples in order to have a fixed amount of training samples or you can reduce the size of the training set performing a sort of clustering (i.e. merging similar samples into a single one). 
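A minimal sketch of that single-step online update with PyBrain, assuming a network has already been trained elsewhere (the topology below is made up):

    from pybrain.datasets import SupervisedDataSet
    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.supervised.trainers import BackpropTrainer

    net = buildNetwork(3, 5, 1)    # made-up topology; in practice, the already-trained net

    def online_update(net, sample, target):
        # one backprop pass over just the newly observed example,
        # keeping the weights learned so far
        ds = SupervisedDataSet(3, 1)
        ds.addSample(sample, target)
        BackpropTrainer(net, ds).train()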
\nIf you explain better your application it'd be simpler suggesting solutions.","Q_Score":3,"Tags":"python,neural-network,pybrain","A_Id":8590831,"CreationDate":"2011-12-20T21:57:00.000","Title":"Retrain a pybrain neural network after adding to the dataset","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I read somewhere that the python library function random.expovariate produces intervals equivalent to Poisson Process events.\nIs that really the case or should I impose some other function on the results?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":12528,"Q_Id":8592048,"Users Score":8,"Answer":"On a strict reading of your question, yes, that is what random.expovariate does. \nexpovariate gives you random floating point numbers, exponentially distributed. In a Poisson process the size of the interval between consecutive events is exponential. \nHowever, there are two other ways I could imagine modelling poisson processes\n\nJust generate random numbers, uniformly distributed and sort them.\nGenerate integers which have a Poisson distribution (i.e. they are distributed like the number of events within a fixed interval in a Poisson process). Use numpy.random.poisson to do this.\n\nOf course all three things are quite different. The right choice depends on your application.","Q_Score":5,"Tags":"python,math,statistics,poisson","A_Id":8592302,"CreationDate":"2011-12-21T15:15:00.000","Title":"Is random.expovariate equivalent to a Poisson Process","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently working on a system for robust hand detection.\nThe first step is to take a photo of the hand (in HSV color space) with the hand placed in a small rectangle to determine the skin color. I then apply a thresholding filter to set all non-skin pixels to black and all skin pixels white. \nSo far it works quite well, but I wanted to ask if there is a better way to solve this? 
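For context, the baseline thresholding step described above might look roughly like this with the OpenCV Python bindings (the image path and HSV bounds are made-up placeholders, not recommended values):

    import cv2
    import numpy as np

    frame = cv2.imread('hand.jpg')                     # made-up input image path
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)      # placeholder skin bounds, not tuned values
    upper = np.array([25, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)              # 255 for in-range (skin) pixels, 0 otherwise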
For example, I found a few papers mentioning concrete color spaces for caucasian people, but none with a comparison for asian\/african\/caucasian color-tones.\nBy the way, I'm working with OpenCV via Python bindings.","AnswerCount":6,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":11923,"Q_Id":8593091,"Users Score":0,"Answer":"Well my experience with the skin modeling are bad, because:\n1) lightning can vary - skin segmentation is not robust\n2) it will mark your face also (as other skin-like objects)\nI would use machine learning techniques like Haar training, which, in my opinion, if far more better approach than modeling and fixing some constraints (like skin detection + thresholding...)","Q_Score":22,"Tags":"python,image-processing,opencv,computer-vision,skin","A_Id":25213056,"CreationDate":"2011-12-21T16:30:00.000","Title":"Robust Hand Detection via Computer Vision","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am doing research for my new project, Following is the details of my project, research and questions:\nProject:\n\nSave the Logs (ex. format is TimeStamp,LOG Entry,Location,Remarks etc ) from different sources. Here Different sources is like, gettting the LOG data from the different systems world wide (Just an Overview)\n(After saving the LOG Entries in Hadoop as specified in 1) Generate Reports of the LOGs saved in Hadoop on demand like drill down, drill up etc\n\nNOTE: For every minute approx. thier will be 50 to 60 MB of LOG Entries from the systems (I checked it).\nResearch and Questions:\n\nFor saving log entries in the Hadoop from different sources, we used Apache Flume.\nWe are creating our own MR programs and servlets.\n\nIs thier any good options other than flume?\nIs thier any Hadoop Data Analysis (Open Source) tool to genarte reports etc?\nI am doing my research, if any of us add some comments to me it will be helpfull.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":321,"Q_Id":8633112,"Users Score":0,"Answer":"I think you can use HIVE. Even I am new to Hadoop but read some where that HIVE is for hadoop analytics. Not sure whether it has GUI or not, but for sure it has SQL capability to query unstructed data.","Q_Score":0,"Tags":"java,python,hadoop","A_Id":8633193,"CreationDate":"2011-12-26T05:17:00.000","Title":"Hadoop - Saving Log Data and Developing GUI","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am doing research for my new project, Following is the details of my project, research and questions:\nProject:\n\nSave the Logs (ex. format is TimeStamp,LOG Entry,Location,Remarks etc ) from different sources. Here Different sources is like, gettting the LOG data from the different systems world wide (Just an Overview)\n(After saving the LOG Entries in Hadoop as specified in 1) Generate Reports of the LOGs saved in Hadoop on demand like drill down, drill up etc\n\nNOTE: For every minute approx. 
thier will be 50 to 60 MB of LOG Entries from the systems (I checked it).\nResearch and Questions:\n\nFor saving log entries in the Hadoop from different sources, we used Apache Flume.\nWe are creating our own MR programs and servlets.\n\nIs thier any good options other than flume?\nIs thier any Hadoop Data Analysis (Open Source) tool to genarte reports etc?\nI am doing my research, if any of us add some comments to me it will be helpfull.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":321,"Q_Id":8633112,"Users Score":1,"Answer":"Have you looked at Datameer ? It provides a GUI to import all these types of files, and create reports as well as dashboards.","Q_Score":0,"Tags":"java,python,hadoop","A_Id":8633276,"CreationDate":"2011-12-26T05:17:00.000","Title":"Hadoop - Saving Log Data and Developing GUI","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to generate a random text using letter frequencies from a book in a .txt file, so that each new character (string.lowercase + ' ') depends on the previous one.\nHow do I use Markov chains to do so? Or is it simpler to use 27 arrays with conditional frequencies for each letter?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1880,"Q_Id":8660015,"Users Score":1,"Answer":"If each character only depends on the previous character, you could just compute the probabilities for all 27^2 pairs of characters.","Q_Score":4,"Tags":"python,markov-chains","A_Id":8660103,"CreationDate":"2011-12-28T18:53:00.000","Title":"Markov chain on letter scale and random text","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written a canny edge detection algorithm for a project. I want to know is there any method to link the broken segments of an edge, since i am getting a single edge as a conglomeration of a few segments. I am getting around 100 segments, which i am sure can be decreased with some intelligence. Please help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":433,"Q_Id":8699665,"Users Score":0,"Answer":"You can use a method named dynamic programming. A very good intro on this can be found on chapter 6 of Sonka's digital image processing book","Q_Score":1,"Tags":"python,image-processing,edge-detection","A_Id":8724586,"CreationDate":"2012-01-02T10:19:00.000","Title":"Linking segments of edges","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm creating a script to convert a whole lot of data into CSV format. 
It runs on Google AppEngine using the mapreduce API, which is only relevant in that it means each row of data is formatted and output separately, in a callback function.\nI want to take advantage of the logic that already exists in the csv module to convert my data into the correct format, but because the CSV writer expects a file-like object, I'm having to instantiate a StringIO for each row, write the row to the object, then return the content of the object, each time.\nThis seems silly, and I'm wondering if there is any way to access the internal CSV formatting logic of the csv module without the writing part.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":464,"Q_Id":8711147,"Users Score":3,"Answer":"The csv module wraps the _csv module, which is written in C. You could grab the source for it and modify it to not require the file-like object, but poking around in the module, I don't see any clear way to do it without recompiling.","Q_Score":0,"Tags":"python,csv","A_Id":8711375,"CreationDate":"2012-01-03T10:52:00.000","Title":"Formatting a single row as CSV","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to a parallelize an application using multiprocessing which takes in\na very large csv file (64MB to 500MB), does some work line by line, and then outputs a small, fixed size\nfile. \nCurrently I do a list(file_obj), which unfortunately is loaded entirely\ninto memory (I think) and I then I break that list up into n parts, n being the\nnumber of processes I want to run. I then do a pool.map() on the broken up\nlists. \nThis seems to have a really, really bad runtime in comparison to a single\nthreaded, just-open-the-file-and-iterate-over-it methodology. Can someone\nsuggest a better solution?\nAdditionally, I need to process the rows of the file in groups which preserve\nthe value of a certain column. These groups of rows can themselves be split up,\nbut no group should contain more than one value for this column.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":14016,"Q_Id":8717179,"Users Score":2,"Answer":"I would keep it simple. Have a single program open the file and read it line by line. You can choose how many files to split it into, open that many output files, and every line write to the next file. This will split the file into n equal parts. You can then run a Python program against each of the files in parallel.","Q_Score":18,"Tags":"python,parallel-processing","A_Id":8717213,"CreationDate":"2012-01-03T18:52:00.000","Title":"Chunking data from a large file for multiprocessing?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a two dimensional table (Matrix)\nI need to process each line in this matrix independently from the others.\nThe process of each line is time consuming.\nI'd like to use parallel computing resources in our university (Canadian Grid something)\nCan I have some advise on how to start ? 
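As an illustrative single-machine starting point (not specific to any grid scheduler; the data and worker below are made up), processing independent rows maps directly onto multiprocessing.Pool:

    from multiprocessing import Pool

    def process_row(row):
        # stand-in for the time-consuming per-row computation
        return sum(row)

    if __name__ == '__main__':
        matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # made-up data
        pool = Pool()
        results = pool.map(process_row, matrix)      # one result per row, computed in parallel
        pool.close()
        pool.join()
        print(results)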
I never used parallel computing before.\nThanks :)","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1155,"Q_Id":8736396,"Users Score":0,"Answer":"Like the commentators have said, find someone to talk to in your university. The answer to your question will be specific to what software is installed on the grid. If you have access to a grid, it's highly likely you also have access to a person whose job it is to answer your questions (and they will be pleased to help) - find this person!","Q_Score":2,"Tags":"python","A_Id":8741894,"CreationDate":"2012-01-05T01:13:00.000","Title":"Parallel computing","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"What's the (best) way to solve a pair of non linear equations using Python. (Numpy, Scipy or Sympy)\neg:\n\n\nx+y^2 = 4\ne^x+ xy = 3\n\n\nA code snippet which solves the above pair will be great","AnswerCount":9,"Available Count":1,"Score":0.0444152037,"is_accepted":false,"ViewCount":135871,"Q_Id":8739227,"Users Score":2,"Answer":"You can use openopt package and its NLP method. It has many dynamic programming algorithms to solve nonlinear algebraic equations consisting: \ngoldenSection, scipy_fminbound, scipy_bfgs, scipy_cg, scipy_ncg, amsg2p, scipy_lbfgsb, scipy_tnc, bobyqa, ralg, ipopt, scipy_slsqp, scipy_cobyla, lincher, algencan, which you can choose from. \nSome of the latter algorithms can solve constrained nonlinear programming problem.\nSo, you can introduce your system of equations to openopt.NLP() with a function like this: \nlambda x: x[0] + x[1]**2 - 4, np.exp(x[0]) + x[0]*x[1]","Q_Score":82,"Tags":"python,numpy,scipy,sympy","A_Id":21651546,"CreationDate":"2012-01-05T07:49:00.000","Title":"How to solve a pair of nonlinear equations using Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example in CQL, \nSELECT * from abc_dimension ORDER BY key ASC;\nseems to be not working.\nAny help?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3328,"Q_Id":8751293,"Users Score":0,"Answer":"Latest versions of Cassandra support aggregations within single partition only.","Q_Score":2,"Tags":"python,cassandra,cql","A_Id":43361173,"CreationDate":"2012-01-05T23:13:00.000","Title":"does cassandra cql support aggregation functions, like group by and order by","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've currently got a project running on PiCloud that involves multiple iterations of an ODE Solver. Each iteration produces a NumPy array of about 30 rows and 1500 columns, with each iterations being appended to the bottom of the array of the previous results.\nNormally, I'd just let these fairly big arrays be returned by the function, hold them in memory and deal with them all at one. Except PiCloud has a fairly restrictive cap on the size of the data that can be out and out returned by a function, to keep down on transmission costs. 
Which is fine, except that means I'd have to launch thousands of jobs, each running on iteration, with considerable overhead.\nIt appears the best solution to this is to write the output to a file, and then collect the file using another function they have that doesn't have a transfer limit.\nIs my best bet to do this just dumping it into a CSV file? Should I add to the CSV file each iteration, or hold it all in an array until the end and then just write once? Is there something terribly clever I'm missing?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":8395,"Q_Id":8775786,"Users Score":2,"Answer":"I would recommend looking at the pickle module. The pickle module allows you to serialize python objects as streams of bytes (e.g., strings). This allows you to write them to a file or send them over a network, and then reinstantiate the objects later.","Q_Score":6,"Tags":"python,numpy,scientific-computing","A_Id":8775931,"CreationDate":"2012-01-08T06:12:00.000","Title":"Efficient ways to write a large NumPy array to a file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do people usually extract the shape of the lips once the mouth region is found (in my case using haar cascade)? I tried color segmentation and edge\/corner detection but they're very inaccurate for me. I need to find the two corners and the very upper and lower lip at the center. I've heard things about active appearance models but I'm having trouble understanding how to use this with python and I don't have enough context to figure out if this is even the conventional method for detecting different parts of the lips. Is that my best choice or do I have other options? If I should use it, how would I get started with it using python and simplecv?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4301,"Q_Id":8840127,"Users Score":0,"Answer":"The color segementation involves \"gradient of the difference between the pseudo-hue and luminance (obtaining hybrid contours)\". Try googling for qouted string and you will find multiple research papers on this topic.","Q_Score":5,"Tags":"python,opencv,face-detection,simplecv","A_Id":33917789,"CreationDate":"2012-01-12T18:15:00.000","Title":"OpenCV Lip Segmentation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list of products that I am trying to classify into categories. 
They will be described with incomplete sentences like:\n\"Solid State Drive Housing\"\n\"Hard Drive Cable\"\n\"1TB Hard Drive\"\n\"500GB Hard Drive, Refurbished from Manufacturer\"\nHow can I use python and NLP to get an output like \"Housing, Cable, Drive, Drive\", or a tree that describes which word is modifying which?\nThank you in advance","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2952,"Q_Id":8841569,"Users Score":2,"Answer":"NLP techniques are relatively ill equipped to deal with this kind of text.\nPhrased differently: it is quite possible to build a solution which includes NLP processes to implement the desired classifier but the added complexity doesn't necessarily pays off in term of speed of development nor classifier precision improvements.\nIf one really insists on using NLP techniques, POS-tagging and its ability to identify nouns is the most obvious idea, but Chunking and access to WordNet or other lexical sources are other plausible uses of NLTK.\nInstead, an ad-hoc solution based on simple regular expressions and a few heuristics such as these suggested by NoBugs is probably an appropriate approach to the problem. Certainly, such solutions bear two main risks:\n\nover-fitting to the portion of the text reviewed\/considered in building the rules\npossible messiness\/complexity of the solution if too many rules and sub-rules are introduced.\n\nRunning some plain statical analysis on the complete (or very big sample) of the texts to be considered should help guide the selection of a few heuristics and also avoid the over-fitting concerns. I'm quite sure that a relatively small number of rules, associated with a custom dictionary should be sufficient to produce a classifier with appropriate precision as well as speed\/resources performance.\nA few ideas:\n\ncount all the words (and possibly all the bi-grams and tri-grams) in a sizable portion of the corpus a hand. This info can drive the design of the classifier by allowing to allocate the most effort and the most rigid rules to the most common patterns. \nmanually introduce a short dictionary which associates the most popular words with:\n\ntheir POS function (mostly a binary matter here: i.e. nouns vs. modifiers and other non-nouns.\ntheir synonym root [if applicable]\ntheir class [if applicable]\n\nIf the pattern holds for most of the input text, consider using the last word before the end of text or before the first comma as the main key to class selection.\nIf the pattern doesn't hold, just give more weight to the first and to the last word.\nconsider a first pass where the text is re-written with the most common bi-grams replaced by a single word (even an artificial code word) which would be in the dictionary\nconsider also replacing the most common typos or synonyms with their corresponding synonym root. Adding regularity in the input helps improve precision and also help making a few rules \/ a few entries in the dictionary have a big return on precision.\nfor words not found in dictionary, assume that words which are mixed with numbers and\/or preceded by numbers are modifiers, not nouns. Assume that the \nconsider a two-tiers classification whereby inputs which cannot be plausibly assigned a class are put in the \"manual pile\" to prompt additional review which results in additional of rules and\/or dictionary entries. After a few iterations the classifier should require less and less improvements and tweaks.\nlook for non-obvious features. 
For example some corpora are made from a mix of sources but some of the sources, may include particular regularities which help identify the source and\/or be applicable as classification hints. For example some sources may only contains say uppercase text (or text typically longer than 50 characters, or truncated words at the end etc.)\n\nI'm afraid this answer falls short of providing Python\/NLTK snippets as a primer towards a solution, but frankly such simple NLTK-based approaches are likely to be disappointing at best. Also, we should have a much bigger sample set of the input text to guide the selection of plausible approaches, include ones that are based on NLTK or NLP techniques at large.","Q_Score":5,"Tags":"python,nlp,nltk","A_Id":8860314,"CreationDate":"2012-01-12T20:08:00.000","Title":"Find subject in incomplete sentence with NLTK","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a text file containing tabular data. What I need to do is automate the task of writing to a new text file that is comma delimited instead of space delimited, extract a few columns from existing data, reorder the columns. \nThis is a snippet of the first 4 lines of the original data:\n\nNumber of rows: 8542\n Algorithm |Date |Time |Longitude |Latitude |Country \n 1 2000-01-03 215926.688 -0.262 35.813 Algeria \n 1 2000-01-03 215926.828 -0.284 35.817 Algeria\n\nHere is what I want in the end:\n\nLongitude,Latitude,Country,Date,Time\n-0.262,35.813,Algeria,2000-01-03,215926.688\n\nAny tips on how to approach this?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":18146,"Q_Id":8858946,"Users Score":0,"Answer":"str.split() without any arguments will split by any length of whitespace. operator.itemgetter() takes multiple arguments, and will return a tuple.","Q_Score":5,"Tags":"python","A_Id":8858983,"CreationDate":"2012-01-14T00:11:00.000","Title":"converting a space delimited file to a CSV","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a netcdf file that I would like to convert to an image (joed, png, gif) using a command line tool.\nIs someone could please help me with the library name and possibly a link to how it is done.\nRegards\nDavid","AnswerCount":7,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":8951,"Q_Id":8864599,"Users Score":0,"Answer":"IDV is a good visualization tool for NetCDF, but, as far as I know, there is no command line interface.\nI would recommend Matlab. It has read and write functions for NetCDF as well as an extensive plotting library...probably one of the best. 
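For a pure-Python alternative to the Matlab route, a hedged sketch using the netCDF4 and matplotlib packages (the file name and variable name are made up) could render a variable to a PNG from the command line:

    from netCDF4 import Dataset
    import matplotlib
    matplotlib.use('Agg')                    # headless backend, suitable for command-line use
    import matplotlib.pyplot as plt

    nc = Dataset('input.nc')                 # made-up file name
    data = nc.variables['temperature'][:]    # made-up 2-D variable name
    plt.imshow(data)
    plt.colorbar()
    plt.savefig('output.png')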
You can then compile the matlab code and run it from the command line.","Q_Score":6,"Tags":"python,netcdf","A_Id":9473690,"CreationDate":"2012-01-14T19:03:00.000","Title":"Convert netcdf to image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Whenever I run the solver 'interalg' (in the SNLE function call from OpenOpt) in a loop my memory usage accumulates until the code stops running.\nIt happen both in my Mac Os X 10.6.8 and in Slackware Linux.\nI would really appreciate some advice, considering that I am not extremely literate in python.\nThank you!\nDaniel","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":143,"Q_Id":8918773,"Users Score":2,"Answer":"Yes, there is clearly a memory leak here. I ran the nlsp demo, that uses SNLE with interalg, using valgrind and found that 295k has been leaked from running the solver once. This should be reported to them.","Q_Score":2,"Tags":"python,memory-leaks,numpy,scipy","A_Id":8959115,"CreationDate":"2012-01-18T22:59:00.000","Title":"'Memory leak' when calling openopt SNLE in a loop","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a wxPython program which reads from different datasets, performs various types of simple on-the-fly analysis on the data and plots various combinations of the datasets to matplotlib canvas. I would like to have the opportunity to dump currently plotted data to file for more sophisticated analysis later on. \nThe question is: are there any methods in matplotlib that allow access to the data currently plotted in matplotlib.Figure?","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":31312,"Q_Id":8938449,"Users Score":2,"Answer":"Its Python, so you can modify the source script directly so the data is dumped before it is plotted","Q_Score":25,"Tags":"python,matplotlib","A_Id":8938840,"CreationDate":"2012-01-20T08:15:00.000","Title":"How to extract data from matplotlib plot","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know how to implement least-squares with elementary matrix decomposition and other operations, but how can I do it in Python? (I've never tried to use matrices in Python)\n\n(clarification edit to satisfy the shoot-first-and-ask-questions-later -1'er)\nI was looking for help to find out how to use numerical programming in Python. Looks like numpy and scipy are the way to go. I was looking for how to use them, but I found a tutorial.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":819,"Q_Id":8953991,"Users Score":2,"Answer":"scipy and numpy is the obvious way to go here.\nNote that numpy uses the famous (and well-optimized) BLAS libraries, so it is also very fast. 
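For instance, a minimal least-squares fit with NumPy might look like this (the design matrix and right-hand side are made up; recent NumPy versions also expect an explicit rcond argument):

    import numpy as np

    A = np.random.rand(10, 3)   # made-up overdetermined design matrix
    b = np.random.rand(10)
    # recent NumPy versions prefer an explicit rcond=None argument here
    x, residuals, rank, sv = np.linalg.lstsq(A, b)   # minimises ||A x - b||_2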
Much faster than any \"pure python\" will ever be.","Q_Score":0,"Tags":"python,matrix,numerical-methods","A_Id":8954018,"CreationDate":"2012-01-21T15:03:00.000","Title":"python: least-squares estimation?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know how to implement least-squares with elementary matrix decomposition and other operations, but how can I do it in Python? (I've never tried to use matrices in Python)\n\n(clarification edit to satisfy the shoot-first-and-ask-questions-later -1'er)\nI was looking for help to find out how to use numerical programming in Python. Looks like numpy and scipy are the way to go. I was looking for how to use them, but I found a tutorial.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":819,"Q_Id":8953991,"Users Score":1,"Answer":"Have a look at SciPy. It's got matrix operations.","Q_Score":0,"Tags":"python,matrix,numerical-methods","A_Id":8954011,"CreationDate":"2012-01-21T15:03:00.000","Title":"python: least-squares estimation?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i apologize in advance for not being very precise, as a i dont know the mathematical expression for what i want. \ni am using matplotlib to analyze a large dataset. What i have now is a distribution of x,y points. I want to find out the cases in which the x values of my function are the same, but y differs the greatest. So if i plot it, one part of the cases is at the top of my graph, the other is the botton of the graph. \nSo how do i get the points(x,y), (x,y') where f(x)=y and f(x)=y' and y-y'=max ?\ncheers","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":960,"Q_Id":9019949,"Users Score":1,"Answer":"I think what you want is a variance plot. Create a dictionary for distinct x values. Put each distinct value of y in a list associated with each x. Find the stdev (np.std) of the list associated with each x say \"s\". Plot the s vs. x.","Q_Score":2,"Tags":"python,statistics,matplotlib","A_Id":9020306,"CreationDate":"2012-01-26T14:52:00.000","Title":"Finding 'edge cases' in a dataset","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a csv file in which the first column contains an identifier and the second column associated data. The identifier is replicated an arbitrary number of times so the file looks like this.\ndata1,123\ndata1,345\ndata1,432\ndata2,654\ndata2,431\ndata3,947\ndata3,673 \nI would like to merge the records to generate a single record for each identifier and get.\ndata1,123,345,432\ndata2,654,431\ndata3,947,673 \nIs there an efficient way to do this in python or numpy? Dictionaries appear to be out due to duplicate keys. At the moment I have the lines in a list of lists then looping through and testing for identity with the previous value at index 0 in the list but this is very clumsy. 
Thanks for any help.","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":267,"Q_Id":9027355,"Users Score":3,"Answer":"You can use a dictionary if the values are lists. defaultdict in the collections module is very useful for this.","Q_Score":3,"Tags":"python,merge,numpy","A_Id":9027774,"CreationDate":"2012-01-27T00:07:00.000","Title":"merging records in python or numpy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Please help me in creating columns and inserting rows under them in a csv file using python scrapy. I need to write scraped data into 3 columns. So first of all three columns are to be created and then data is to be entered in each row.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":506,"Q_Id":9030953,"Users Score":0,"Answer":"1)you can use custom Csv feed exporter each item key will be treated as a column heading and value as column value.\n2) you can write a pipeline that can write data in csv file using python csv lib.","Q_Score":0,"Tags":"python,scrapy","A_Id":9144538,"CreationDate":"2012-01-27T09:01:00.000","Title":"How to create columns in a csv file and insert row under them in python scrapy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Please help me in creating columns and inserting rows under them in a csv file using python scrapy. I need to write scraped data into 3 columns. So first of all three columns are to be created and then data is to be entered in each row.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":506,"Q_Id":9030953,"Users Score":1,"Answer":"CSV is a Comma Saparated Values format. That basically means that it is a text file with some strings separated by commas and line-downs.\nEach line down creates a row and each comma creates a column in that row.\nI guess the simplest way to create a CSV file would be to create a Pythonic dict where each key is a column and the value for each column is a list of rows where None stands for the obvious lack of value.\nYou can then fill in your dict by appending values to the requested column (thus adding a row) and then easily transform the dict into a CSV file by iterating over list indexes and for each column either add a VALUE, entry in the file or a , entry for index-out-of-bound or a None value for the corresponding list.\nFor each row add a line down.","Q_Score":0,"Tags":"python,scrapy","A_Id":9032800,"CreationDate":"2012-01-27T09:01:00.000","Title":"How to create columns in a csv file and insert row under them in python scrapy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Say I have a huge numpy matrix A taking up tens of gigabytes. It takes a non-negligible amount of time to allocate this memory.\nLet's say I also have a collection of scipy sparse matrices with the same dimensions as the numpy matrix. 
Sometimes I want to convert one of these sparse matrices into a dense matrix to perform some vectorized operations.\nCan I load one of these sparse matrices into A rather than re-allocate space each time I want to convert a sparse matrix into a dense matrix? The .toarray() method which is available on scipy sparse matrices does not seem to take an optional dense array argument, but maybe there is some other way to do this.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":995,"Q_Id":9041236,"Users Score":1,"Answer":"It does seem like there should be a better way to do this (and I haven't scoured the documentation), but you could always loop over the elements of the sparse array and assign to the dense array (probably zeroing out the dense array first). If this ends up too slow, that seems like an easy C extension to write....","Q_Score":1,"Tags":"python,numpy,scipy,numerical-computing","A_Id":9042475,"CreationDate":"2012-01-27T23:03:00.000","Title":"Load sparse scipy matrix into existing numpy dense matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm preparing a set of reports using open source ReportLab. The reports contain a number of charts. Everything works well so far.\nI've been asked to take a (working) bar chart that shows two series of data and overlay a fitted curve for each series.\nI can see how I could overlay a segmented line on the bar graph by creating both a line chart and bar chart in the same ReportLab drawing. I can't find any reference for fitted curves in ReportLab, however.\nDoes anyone have any insight into plotting a fitted curve to a series of data in ReportLab or, failing that, a suggestion about how to accomplish this task (I'm thinking that chart would need to be produced in matplotlib instead)?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":447,"Q_Id":9045888,"Users Score":1,"Answer":"I would recommend using MatPlotLib. This is exactly the sort of thing it's designed to handle and it will be much easier than trying to piece together something in ReportLab alone, especially since you'll have to do all the calculation of the line on your own and figure out the details of how to draw it in just the right place. MatPlotLib integrates easily with ReportLab; I've used the combination several times with great results.","Q_Score":0,"Tags":"python,charts,reportlab,curve-fitting","A_Id":9046880,"CreationDate":"2012-01-28T14:16:00.000","Title":"Fitted curve on chart using ReportLab","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is any method\/function in the python wrapper of Opencv that finds black areas in a binary image? (like regionprops in Matlab)\nUp to now I load my source image, transform it into a binary image via threshold and then invert it to highlight the black areas (that now are white).\nI can't use third party libraries such as cvblobslob or cvblob","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":26428,"Q_Id":9056646,"Users Score":0,"Answer":"I know this is an old question, but for completeness I wanted to point out that cv2.moments() will not always work for small contours. 
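For reference, the usual moments-based centroid is computed as in the sketch below (binary_img is assumed to be your thresholded, inverted 8-bit mask; note the findContours return signature differs between OpenCV 2.x\/4.x and 3.x). The division by m00 is exactly what blows up for tiny or single-point contours.

    import cv2

    # OpenCV 2.x/4.x return two values; 3.x also returns the image
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    m = cv2.moments(contours[0])
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # fails when m00 == 0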
In this case, you can use cv2.minEnclosingCircle() which will always return the center coordinates (and radius), even if you have only a single point. Slightly more resource-hungry though, I think...","Q_Score":14,"Tags":"python,opencv,colors,detection,threshold","A_Id":66779646,"CreationDate":"2012-01-29T20:51:00.000","Title":"Python OpenCV - Find black areas in a binary image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is any method\/function in the python wrapper of Opencv that finds black areas in a binary image? (like regionprops in Matlab)\nUp to now I load my source image, transform it into a binary image via threshold and then invert it to highlight the black areas (that now are white).\nI can't use third party libraries such as cvblobslob or cvblob","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":26428,"Q_Id":9056646,"Users Score":0,"Answer":"Transform it to binary image using threshold with the CV_THRESH_BINARY_INV flag, you get threshold + inversion in one step.","Q_Score":14,"Tags":"python,opencv,colors,detection,threshold","A_Id":15779624,"CreationDate":"2012-01-29T20:51:00.000","Title":"Python OpenCV - Find black areas in a binary image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is any method\/function in the python wrapper of Opencv that finds black areas in a binary image? (like regionprops in Matlab)\nUp to now I load my source image, transform it into a binary image via threshold and then invert it to highlight the black areas (that now are white).\nI can't use third party libraries such as cvblobslob or cvblob","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":26428,"Q_Id":9056646,"Users Score":2,"Answer":"After inverting binary image to turn black to white areas, apply cv.FindContours function. It will give you boundaries of the region you need.\nLater you can use cv.BoundingRect to get minimum bounding rectangle around region. Once you got the rectangle vertices, you can find its center etc.\nOr to find centroid of region, use cv.Moment function after finding contours. Then use cv.GetSpatialMoments in x and y direction. It is explained in opencv manual. \nTo find area, use cv.ContourArea function.","Q_Score":14,"Tags":"python,opencv,colors,detection,threshold","A_Id":9058880,"CreationDate":"2012-01-29T20:51:00.000","Title":"Python OpenCV - Find black areas in a binary image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a scientific application that reads a potentially huge data file from disk and transforms it into various Python data structures such as a map of maps, list of lists etc. NumPy is called in for numerical analysis. The problem is, the memory usage can grow rapidly. As swap space is called in, the system slows down significantly. The general strategy I have seen:\n\nlazy initialization: this doesn't seem to help in the sense that many operations require in memory data anyway. 
\nshelving: this Python standard library seems support writing data object into a datafile (backed by some db) . My understanding is that it dumps data to a file, but if you need it, you still have to load all of them into memory, so it doesn't exactly help. Please correct me if this is a misunderstanding.\nThe third option is to leverage a database, and offload as much data processing to it\n\nAs an example: a scientific experiment runs several days and have generated a huge (tera bytes of data) sequence of:\n\nco-ordinate(x,y) observed event E at time t.\n\nAnd we need to compute a histogram over t for each (x,y) and output a 3-dimensional array.\nAny other suggestions? I guess my ideal case would be the in-memory data structure can be phased to disk based on a soft memory limit and this process should be as transparent as possible. Can any of these caching frameworks help? \nEdit:\nI appreciate all the suggested points and directions. Among those, I found user488551's comments to be most relevant. As much as I like Map\/Reduce, to many scientific apps, the setup and effort for parallelization of code is even a bigger problem to tackle than my original question, IMHO. It is difficult to pick an answer as my question itself is so open ... but Bill's answer is more close to what we can do in real world, hence the choice. Thank you all.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":294,"Q_Id":9071031,"Users Score":1,"Answer":"Well, if you need the whole dataset in RAM, there's not much to do but get more RAM. Sounds like you aren't sure if you really need to, but keeping all the data resident requires the smallest amount of thinking :) \nIf your data comes in a stream over a long period of time, and all you are doing is creating a histogram, you don't need to keep it all resident. Just create your histogram as you go along, write the raw data out to a file if you want to have it available later, and let Python garbage collect the data as soon as you have bumped your histogram counters. All you have to keep resident is the histogram itself, which should be relatively small.","Q_Score":3,"Tags":"python","A_Id":9071479,"CreationDate":"2012-01-30T21:29:00.000","Title":"How to handle large memory footprint in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a scientific application that reads a potentially huge data file from disk and transforms it into various Python data structures such as a map of maps, list of lists etc. NumPy is called in for numerical analysis. The problem is, the memory usage can grow rapidly. As swap space is called in, the system slows down significantly. The general strategy I have seen:\n\nlazy initialization: this doesn't seem to help in the sense that many operations require in memory data anyway. \nshelving: this Python standard library seems support writing data object into a datafile (backed by some db) . My understanding is that it dumps data to a file, but if you need it, you still have to load all of them into memory, so it doesn't exactly help. 
Please correct me if this is a misunderstanding.\nThe third option is to leverage a database, and offload as much data processing to it\n\nAs an example: a scientific experiment runs several days and have generated a huge (tera bytes of data) sequence of:\n\nco-ordinate(x,y) observed event E at time t.\n\nAnd we need to compute a histogram over t for each (x,y) and output a 3-dimensional array.\nAny other suggestions? I guess my ideal case would be the in-memory data structure can be phased to disk based on a soft memory limit and this process should be as transparent as possible. Can any of these caching frameworks help? \nEdit:\nI appreciate all the suggested points and directions. Among those, I found user488551's comments to be most relevant. As much as I like Map\/Reduce, to many scientific apps, the setup and effort for parallelization of code is even a bigger problem to tackle than my original question, IMHO. It is difficult to pick an answer as my question itself is so open ... but Bill's answer is more close to what we can do in real world, hence the choice. Thank you all.","AnswerCount":2,"Available Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":294,"Q_Id":9071031,"Users Score":3,"Answer":"Have you considered divide and conquer? Maybe your problem lends itself to that. One framework you could use for that is Map\/Reduce.\nDoes your problem have multiple phases such that Phase I requires some data as input and generates an output which can be fed to phase II? In that case you can have 1 process do phase I and generate data for phase II. Maybe this will reduce the amount of data you simultaneously need in memory?\nCan you divide your problem into many small problems and recombine the solutions? In this case you can spawn multiple processes that each handle a small sub-problem and have one or more processes to combine these results in the end?\nIf Map-Reduce works for you look at the Hadoop framework.","Q_Score":3,"Tags":"python","A_Id":9071108,"CreationDate":"2012-01-30T21:29:00.000","Title":"How to handle large memory footprint in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"May seem stupid, but after using Matlab for a while (a couple of years), I've tried Python, and despite some Matlab's features that are really handy, I really like Python.\nNow, for work, I'm using Matlab again, and sometimes I miss a structure like Python's 'for' loop. 
Instead of using the standard 'for' that Matlab provides, there is a structure more similar to process batches of similar data?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":273,"Q_Id":9077225,"Users Score":1,"Answer":"In addition to the given answer, be aware that MATLAB's forloop is very slow.\nMaybe programming in a functional style using arrayfun, cellfun() and structfun() might be a handier solution, and quite close to Python's map().","Q_Score":1,"Tags":"python,matlab,for-loop","A_Id":9079959,"CreationDate":"2012-01-31T09:31:00.000","Title":"For cycle in Python's way in Matlab","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Python has a lot of convenient data structures (lists, tuples, dicts, sets, etc) which can be used to make other 'conventional' data structures (Eg, I can use a Python list to create a stack and a collections.dequeue to make a queue, dicts to make trees and graphs, etc).\nThere are even third-party data structures that can be used for specific tasks (for instance the structures in Pandas, pytables, etc).\nSo, if I know how to use lists, dicts, sets, etc, should I be able to implement any arbitrary data structure if I know what it is supposed to accomplish?\nIn other words, what kind of data structures can the Python data structures not be used for?\nThanks","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2410,"Q_Id":9091252,"Users Score":0,"Answer":"Given that all data structures exist in memory, and memory is effectively just a list (array)... there is no data structure that couldn't be expressed in terms of the basic Python data structures (with appropriate code to interact with them).","Q_Score":2,"Tags":"python,data-structures","A_Id":9091268,"CreationDate":"2012-02-01T05:31:00.000","Title":"Data structures with Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Our company has developed Python libraries to open and display data from files using our proprietary file format. The library only depends on numpy which has been ported to IronPython.\nThe setup.py for our internal distribution imports from setuptools but apparently this is not yet supported in IronPython. Searching the wirenet produces many references to a blog by Jeff Hardy that was written three years ago.\nCan someone explain the relationship between setuptools, ez_install, and distutils?\nIs there a way to distribute our library that is compatible with both CPython and IronPython.\nMany thanks,\nKenny","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":919,"Q_Id":9134717,"Users Score":1,"Answer":"Distribute is a fork of setuptools that supports Python 3, among other things. ez_install is used to install setuptools\/easy_install, and then easy_install can be used to install packages (although pip is better).\nThree years ago IronPython was missing a lot of the pieces needed, like zlib (2.7.0) and zipimport (upcoming 2.7.2). 
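(A common way to keep a single setup.py working on both CPython and interpreters without setuptools is the usual try\/except fallback to distutils; this is only a sketch, and the package name and version are placeholders.)

    try:
        from setuptools import setup        # CPython with setuptools/distribute
    except ImportError:
        from distutils.core import setup    # plain distutils fallback

    setup(name="mylib",                     # placeholder package name
          version="0.1",
          packages=["mylib"])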
I haven't checked in a while to see it works, though, but any changes now should be minor.","Q_Score":0,"Tags":"ironpython,setuptools","A_Id":9136461,"CreationDate":"2012-02-03T19:59:00.000","Title":"IronPython and setuptools\/ez_install","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use graphviz in order to draw for a given graph all the maximal cliques that it has.\nTherefore I would like that nodes in the same maximal clique will be visually encapsulated together (meaning that I would like that a big circle will surround them). I know that the cluster option exists - but in all the examples that I saw so far - each node is in one cluster only. In the maximal clique situation, a node can be in multiple cliques.\nIs there an option to visualize this with graphviz?\nIf not, are there any other tools for this task (preferably with a python api).\nThank you.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3488,"Q_Id":9213797,"Users Score":0,"Answer":"I don't think you can do this. Clusters are done via subgraphs, which are expected to be separate graphs, not overlapping with other subgraphs.\nYou could change the visualisation though; if you imagine that the members of a clique are members of some set S, then you could simply add a node S and add directed or dashed edges linking each member to the S node. If the S nodes are given a different shape, then it should be clear which nodes are in which cliques.\nIf you really want, you can give the edges connecting members to their clique node high weights, which should bring them close together on the graph.\nNote that there would never be edges between the clique nodes; that would indicate that two cliques are maximally connected, which just implies they are in fact one large clique, not two separate ones.","Q_Score":8,"Tags":"python,graphviz","A_Id":9215067,"CreationDate":"2012-02-09T15:33:00.000","Title":"Graphviz - Drawing maximal cliques","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am having to do a lot of vision related work in Python lately, and I am facing a lot of difficulties switching between formats. When I read an image using Mahotas, I cannot seem to get it to cv2, though they are both using numpy.ndarray. SimpleCV can take OpenCV images easily, but getting SimpleCV image out for legacy cv or mahotas seems to be quite a task.\nSome format conversion syntaxes would be really appreciated. For example, if I open a greyscale image using mahotas, it is treated to be in floating point colour space by default, as I gather. Even when I assign the type as numpy.uint8, cv2 cannot seem to recognise it as an array. I do not know how to solve this problem. I am not having much luck with colour images either. I am using Python 2.7 32bit on Ubuntu Oneiric Ocelot.\nThanks in advance!","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":5140,"Q_Id":9220720,"Users Score":2,"Answer":"I have never used mahotas. But I'm currently working on SimpleCV. 
I have just sent a pull request for making SimpleCV numpy array compatible with cv2.\nSo, basically,\n\nImage.getNumpy() -> numpy.ndarray for cv2\nImage.getBitmap() -> cv2.cv.iplimage\nImage.getMatrix() -> cv2.cv.cvmat\n\nTo convert cv2 numpy array to SimpleCV Image object,\n\nImage(cv2_image) -> SimpleCV.ImageClass.Image","Q_Score":5,"Tags":"python,opencv,python-2.7,simplecv,mahotas","A_Id":11412849,"CreationDate":"2012-02-09T23:43:00.000","Title":"Image Conversion between cv2, cv, mahotas, and SimpleCV","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have many 100x100 grids, is there an efficient way using numpy to calculate the median for every grid point and return just one 100x100 grid with the median values? Presently, I'm using a for loop to run through each grid point, calculating the median and then combining them into one grid at the end. I'm sure there's a better way to do this using numpy. Any help would be appreciated! Thanks!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1689,"Q_Id":9245466,"Users Score":0,"Answer":"How many grids are there? \nOne option would be to create a 3D array that is 100x100xnumGrids and compute the median across the 3rd dimension.","Q_Score":1,"Tags":"python,numpy,statistics","A_Id":9245481,"CreationDate":"2012-02-12T00:51:00.000","Title":"Efficient two dimensional numpy array statistics","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two sparse matrix A (affinity matrix) and D (Diagonal matrix) with dimension 100000*100000. I have to compute the Laplacian matrix L = D^(-1\/2)*A*D^(-1\/2). I am using scipy CSR format for sparse matrix.\nI didnt find any method to find inverse of sparse matrix. How to find L and inverse of sparse matrix? Also suggest that is it efficient to do so by using python or shall i call matlab function for calculating L?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1503,"Q_Id":9248821,"Users Score":1,"Answer":"In general the inverse of a sparse matrix is not sparse which is why you won't find sparse matrix inverters in linear algebra libraries. Since D is diagonal, D^(-1\/2) is trivial and the Laplacian matrix calculation is thus trivial to write down. L has the same sparsity pattern as A but each value A_{ij} is multiplied by (D_i*D_j)^{-1\/2}.\nRegarding the issue of the inverse, the standard approach is always to avoid calculating the inverse itself. Instead of calculating L^-1, repeatedly solve Lx=b for the unknown x. 
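A rough scipy sketch of both points, assuming d is a 1-D array holding the strictly positive diagonal of D, A is your sparse affinity matrix, and b is a right-hand side:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))     # D^(-1/2) is itself diagonal
    L = D_inv_sqrt.dot(A).dot(D_inv_sqrt)       # same sparsity pattern as A
    lu = splu(L.tocsc())                        # factorize once (the expensive part)
    x = lu.solve(b)                             # cheap solve per right-hand side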
All good matrix solvers will allow you to decompose L which is expensive and then back-substitute (which is cheap) repeatedly for each value of b.","Q_Score":3,"Tags":"python,linear-algebra,sparse-matrix,matrix-inverse","A_Id":9248907,"CreationDate":"2012-02-12T12:37:00.000","Title":"Python Sparse matrix inverse and laplacian calculation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to build something in python that can analyze an uploaded mp3 and generate the necessary data to build a waveform graphic. Everything I've found is much more complex than I need. Ultimately, I'm trying to build something like you'd see on SoundCloud.\nI've been looking into numpy and fft's, but it all seem more complicated than I need. What's the best approach to this? I'll build the actual graphic using canvas, so don't worry about that part of it, I just need the data to plot.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1275,"Q_Id":9254671,"Users Score":1,"Answer":"An MP3 file is an encoded version of a waveform. Before you can work with the waveform, you must first decode the MP3 data into a PCM waveform. Once you have PCM data, each sample represents the waveform's amplitude at the point in time. If we assume an MP3 decoder outputs signed, 16-bit values, your amplitudes will range from -16384 to +16383. If you normalize the samples by dividing each by 16384, the waveform samples will then range between +\/- 1.0.\nThe issue really is one of MP3 decoding to PCM. As far as I know, there is no native python decoder. You can, however, use LAME, called from python as a subprocess or, with a bit more work, interface the LAME library directly to Python with something like SWIG. Not a trivial task.\nPlotting this data then becomes an exercise for the reader.","Q_Score":2,"Tags":"python,waveform","A_Id":9254885,"CreationDate":"2012-02-13T01:53:00.000","Title":"Generate volume curve from mp3","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a simple python script which plots some graphs in the same figure. All graphs are created by the draw() and in the end I call the show() function to block.\nThe script used to work with Python 2.6.6, Matplotlib 0.99.3, and Ubuntu 11.04. Tried to run it under Python 2.7.2, Matplotlib 1.0.1, and Ubuntu 11.10 but the show() function returns immediately without waiting to kill the figure.\nIs this a bug? Or a new feature and we'll have to change our scripts? 
Any ideas?\nEDIT: It does keep the plot open under interactive mode, i.e., python -i ..., but it used to work without that, and tried to have plt.ion() in the script and run it in normal mode but no luck.","AnswerCount":5,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":84640,"Q_Id":9280171,"Users Score":52,"Answer":"I think that using show(block=True) should fix your problem.","Q_Score":55,"Tags":"python,matplotlib","A_Id":9280538,"CreationDate":"2012-02-14T16:06:00.000","Title":"Matplotlib python show() returns immediately","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a scientific model which I am running in Python which produces a lookup table as output. That is, it produces a many-dimensional 'table' where each dimension is a parameter in the model and the value in each cell is the output of the model.\nMy question is how best to store this lookup table in Python. I am running the model in a loop over every possible parameter combination (using the fantastic itertools.product function), but I can't work out how best to store the outputs.\nIt would seem sensible to simply store the output as a ndarray, but I'd really like to be able to access the outputs based on the parameter values not just indices. For example, rather than accessing the values as table[16][5][17][14] I'd prefer to access them somehow using variable names\/values, for example:\ntable[solar_z=45, solar_a=170, type=17, reflectance=0.37]\nor something similar to that. It'd be brilliant if I were able to iterate over the values and get their parameter values back - that is, being able to find out that table[16]... corresponds to the outputs for solar_z = 45.\nIs there a sensible way to do this in Python?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":1600,"Q_Id":9280488,"Users Score":1,"Answer":"If you want to access the results by name, then you could use a python nested dictionary instead of ndarray, and serialize it in a .JSON text file using json module.","Q_Score":5,"Tags":"python,numpy","A_Id":9280574,"CreationDate":"2012-02-14T16:27:00.000","Title":"How to store numerical lookup table in Python (with labels)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In NLTK, if I write a NaiveBayes classifier for say movie reviews (determining if positive or negative), how can I determine the classifier \"certainty\" when classify a particular review? That is, I know how to run an 'accuracy' test on a given test set to see the general accuracy of the classifier. But is there anyway to have NLTk output its certainess? (perhaps on the basis on the most informative features...)\nThanks\nA","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":524,"Q_Id":9288221,"Users Score":1,"Answer":"I am not sure about the NLTK implementation of Naive Bayes, but the Naive Bayes algorithm outputs probabilities of class membership. However, they are horribly calibrated.\nIf you want good measures of certainty, you should use a different classification algorithm. 
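That said, if you do want the raw Naive Bayes numbers, NLTK exposes them through prob_classify. A minimal sketch, where train_set and review_features are assumed to be NLTK-style feature dictionaries and 'pos'\/'neg' are the labels used in training:

    import nltk

    classifier = nltk.NaiveBayesClassifier.train(train_set)
    dist = classifier.prob_classify(review_features)   # a probability distribution
    print(dist.max())                                   # most likely label
    print(dist.prob('pos'), dist.prob('neg'))           # per-label "certainty"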
Logistic regression will do a decent job at producing calibrated estimates.","Q_Score":1,"Tags":"python,classification,nltk,probability","A_Id":9300400,"CreationDate":"2012-02-15T05:16:00.000","Title":"NLTK certainty measure?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In NLTK, if I write a NaiveBayes classifier for say movie reviews (determining if positive or negative), how can I determine the classifier \"certainty\" when classify a particular review? That is, I know how to run an 'accuracy' test on a given test set to see the general accuracy of the classifier. But is there anyway to have NLTk output its certainess? (perhaps on the basis on the most informative features...)\nThanks\nA","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":524,"Q_Id":9288221,"Users Score":1,"Answer":"nltk.classify.util.log_likelihood. For this problem, you can also try measuring the results by precision, recall, F-score at the token level, that is, scores for positive and negative respectively.","Q_Score":1,"Tags":"python,classification,nltk,probability","A_Id":9300932,"CreationDate":"2012-02-15T05:16:00.000","Title":"NLTK certainty measure?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a simple 2d brownian motion simulator in Python. It's obviously easy to draw values for x displacement and y displacement from a distribution, but I have to set it up so that the 2d displacement (ie hypotenuse) is drawn from a distribution, and then translate this to new x and y coordinates. This is probably trivial and I'm just too far removed from trigonometry to remember how to do it correctly. Am I going to need to generate a value for the hypotenuse and then translate it into x and y displacements with sin and cos? (How do you do this correctly?)","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":996,"Q_Id":9297679,"Users Score":0,"Answer":"If you have a hypotenuse in the form of a line segment, then you have two points. From two points in the form P0 = (x0, y0) P1 = (x1, y1) you can get the x and y displacements by subtracting x0 from x1 and y0 from y1.\nIf your hypotenuse is actually a vector in a polar coordinate plane, then yes, you'll have to take the sin of the angle and multiply it by the magnitude of the vector to get the y displacement and likewise with cos for the x displacement.","Q_Score":1,"Tags":"python,random,trigonometry,hypotenuse","A_Id":9297835,"CreationDate":"2012-02-15T16:57:00.000","Title":"2d random walk in python - drawing hypotenuse from distribution","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a simple 2d brownian motion simulator in Python. It's obviously easy to draw values for x displacement and y displacement from a distribution, but I have to set it up so that the 2d displacement (ie hypotenuse) is drawn from a distribution, and then translate this to new x and y coordinates. 
This is probably trivial and I'm just too far removed from trigonometry to remember how to do it correctly. Am I going to need to generate a value for the hypotenuse and then translate it into x and y displacements with sin and cos? (How do you do this correctly?)","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":996,"Q_Id":9297679,"Users Score":1,"Answer":"This is best done by using polar coordinates (r, theta) for your distributions (where r is your \"hypotenuse\")), and then converting the result to (x, y), using x = r cos(theta) and y = r sin(theta). That is, select r from whatever distribution you like, and then select a theta, usually from a flat, 0 to 360 deg, distribution, and then convert these values to x and y.\nGoing the other way around (i.e., constructing correlated (x, y) distributions that gave a direction independent hypotenuse) would be very difficult.","Q_Score":1,"Tags":"python,random,trigonometry,hypotenuse","A_Id":9298238,"CreationDate":"2012-02-15T16:57:00.000","Title":"2d random walk in python - drawing hypotenuse from distribution","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently trying to create a Neural Network with pybrain for stock price forecasting. Up to now I have only used Networks with a binary output. For those Networks sigmoid inner layers were sufficient but I don't think this would be the right approach for Forecasting a price.\nThe problem is, that when I create such a completely linear network I always get an error like\n\nRuntimeWarning: overflow encountered in square while backprop training.\n\nI already scaled down the inputs. Could it be due to the size of my training sets (50000 entries per training set)?\nHas anyone done something like this before?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1055,"Q_Id":9340677,"Users Score":1,"Answer":"Try applying log() to the price-attribute - then scale all inputs and outputs to [-1..1] - of course, when you want to get the price from the network-output you'll have to reverse log() with exp()","Q_Score":1,"Tags":"python,neural-network,backpropagation,forecasting,pybrain","A_Id":9349317,"CreationDate":"2012-02-18T11:15:00.000","Title":"Pybrain: Completely linear network","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What are the best algorithms for Word-Sense-Disambiguation\nI read a lot of posts, and each one proves in a research document that a specific algorithm is the best, this is very confusing.\nI just come up with 2 realizations 1-Lesk Algorithm is deprecated, 2-Adapted Lesk is good but not the best\nPlease if anybody based on his (Experience) know any other good algorithm that give accuracy up to say 70% or more please mention it . and if there's a link to any Pseudo Code for the algorithm it'll be great, I'll try to implement it in Python or Java .","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1301,"Q_Id":9355460,"Users Score":1,"Answer":"Well, WSD is an open problem (since it's language... and AI...), so currently each of those claims are in some sense valid. 
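(If you just want a quick Python baseline to measure other approaches against, NLTK ships a simplified Lesk implementation, with the limitations you already noted; a minimal sketch, assuming the punkt and wordnet data have been downloaded:)

    from nltk import word_tokenize
    from nltk.wsd import lesk

    # needs: nltk.download('punkt'); nltk.download('wordnet')
    sent = word_tokenize("I went to the bank to deposit my money")
    print(lesk(sent, "bank"))   # prints the WordNet Synset the algorithm guesses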
If you are engaged in a domain-specific project, I think you'd be best served by a statistical method (Support Vector Machines) if you can find a proper corpus. Personally, if you're using python, unless you're attempting to do some significant original research, I think you should just use the NLTK module to accomplish whatever you're trying to do.","Q_Score":0,"Tags":"python,nlp,nltk,text-processing","A_Id":9355945,"CreationDate":"2012-02-20T02:25:00.000","Title":"What are the best algorithms for Word-Sense-Disambiguation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to have missing values in scikit-learn ? How should they be represented? I couldn't find any documentation about that.","AnswerCount":7,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":39190,"Q_Id":9365982,"Users Score":20,"Answer":"I wish I could provide a simple example, but I have found that RandomForestRegressor does not handle NaN's gracefully. Performance gets steadily worse when adding features with increasing percentages of NaN's. Features that have \"too many\" NaN's are completely ignored, even when the nan's indicate very useful information.\nThis is because the algorithm will never create a split on the decision \"isnan\" or \"ismissing\". The algorithm will ignore a feature at a particular level of the tree if that feature has a single NaN in that subset of samples. But, at lower levels of the tree, when sample sizes are smaller, it becomes more likely that a subset of samples won't have a NaN in a particular feature's values, and a split can occur on that feature.\nI have tried various imputation techniques to deal with the problem (replace with mean\/median, predict missing values using a different model, etc.), but the results were mixed.\nInstead, this is my solution: replace NaN's with a single, obviously out-of-range value (like -1.0). This enables the tree to split on the criteria \"unknown-value vs known-value\". However, there is a strange side-effect of using such out-of-range values: known values near the out-of-range value could get lumped together with the out-of-range value when the algorithm tries to find a good place to split. For example, known 0's could get lumped with the -1's used to replace the NaN's. So your model could change depending on if your out-of-range value is less than the minimum or if it's greater than the maximum (it could get lumped in with the minimum value or maximum value, respectively). This may or may not help the generalization of the technique, the outcome will depend on how similar in behavior minimum- or maximum-value samples are to NaN-value samples.","Q_Score":34,"Tags":"python,machine-learning,scikit-learn,missing-data,scikits","A_Id":17582671,"CreationDate":"2012-02-20T17:56:00.000","Title":"Missing values in scikits machine learning","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to have missing values in scikit-learn ? How should they be represented? 
I couldn't find any documentation about that.","AnswerCount":7,"Available Count":3,"Score":0.0285636566,"is_accepted":false,"ViewCount":39190,"Q_Id":9365982,"Users Score":1,"Answer":"When you run into missing values on input features, the first order of business is not how to impute the missing. The most important question is WHY SHOULD you. Unless you have clear and definitive mind what the 'true' reality behind the data is, you may want to curtail urge to impute. This is not about technique or package in the first place. \nHistorically we resorted to tree methods like decision trees mainly because some of us at least felt that imputing missing to estimate regression like linear regression, logistic regression, or even NN is distortive enough that we should have methods that do not require imputing missing 'among the columns'. The so-called missing informativeness. Which should be familiar concept to those familiar with, say, Bayesian. \nIf you are really modeling on big data, besides talking about it, the chance is you face large number of columns. In common practice of feature extraction like text analytics, you may very well say missing means count=0. That is fine because you know the root cause. The reality, especially when facing structured data sources, is you don't know or simply don't have time to know the root cause. But your engine forces to plug in a value, be it NAN or other place holders that the engine can tolerate, I may very well argue your model is as good as you impute, which does not make sense. \nOne intriguing question is : if we leave missingness to be judged by its close context inside the splitting process, first or second degree surrogate, does foresting actually make the contextual judgement a moot because the context per se is random selection? This, however, is a 'better' problem. At least it does not hurt that much. It certainly should make preserving missingness unnecessary.\nAs a practical matter, if you have large number of input features, you probably cannot have a 'good' strategy to impute after all. From the sheer imputation perspective, the best practice is anything but univariate. Which is in the contest of RF pretty much means to use the RF to impute before modeling with it. \nTherefore, unless somebody tells me (or us), \"we are not able to do that\", I think we should enable carrying forward missing 'cells', entirely bypassing the subject of how 'best' to impute.","Q_Score":34,"Tags":"python,machine-learning,scikit-learn,missing-data,scikits","A_Id":48199308,"CreationDate":"2012-02-20T17:56:00.000","Title":"Missing values in scikits machine learning","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to have missing values in scikit-learn ? How should they be represented? I couldn't find any documentation about that.","AnswerCount":7,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":39190,"Q_Id":9365982,"Users Score":11,"Answer":"I have come across very similar issue, when running the RandomForestRegressor on data. The presence of NA values were throwing out \"nan\" for predictions. 
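In code, both of the fixes described just below reduce to a line or two of pandas; df is a hypothetical DataFrame of features and y the target:

    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    # df: hypothetical DataFrame of numeric features with NaN holes
    X = df.fillna(df.median())   # column-wise median imputation
    # for a categorical column, the most frequent value plays the same role, e.g.
    # df["cat"] = df["cat"].fillna(df["cat"].mode()[0])
    model = RandomForestRegressor(n_estimators=100).fit(X, y)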
From scrolling around several discussions, the Documentation by Breiman recommends two solutions for continuous and categorical data respectively.\n\nCalculate the Median of the data from the column(Feature) and use\nthis (Continuous Data) \nDetermine the most frequently occurring Category and use this\n(Categorical Data)\n\nAccording to Breiman the random nature of the algorithm and the number of trees will allow for the correction without too much effect on the accuracy of the prediction. This I feel would be the case if the presence of NA values is sparse, a feature containing many NA values I think will most likely have an affect.","Q_Score":34,"Tags":"python,machine-learning,scikit-learn,missing-data,scikits","A_Id":18020591,"CreationDate":"2012-02-20T17:56:00.000","Title":"Missing values in scikits machine learning","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does anybody know a python function (proven to work and having its description in internet) which able to make minimum search for a provided user function when argument is an array of integers?\nSomething like\nscipy.optimize.fmin_l_bfgs_b\nscipy.optimize.leastsq\nbut for integers","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":569,"Q_Id":9367630,"Users Score":0,"Answer":"There is no general solution for this problem. If you know the properties of the function it should be possible to deduce some bounds for the variables and then test all combinations. But that is not very efficient.\nYou could approximate a solution with scipy.optimize.leastsq and then round the results to integers. The quality of the result of course depends on the structure of the function.","Q_Score":0,"Tags":"python,numpy,scipy","A_Id":9367777,"CreationDate":"2012-02-20T20:01:00.000","Title":"Optimizer\/minimizer for integer argument","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been working with numpy and needed the random.choice() function. Sadly, in version 2.0 it's not in the random or the random.mtrand.RandomState modules. Has it been excluded for a particular reason? There's nothing in the discussion or documentation about it!\nFor info, I'm running Numpy 2.0 on python 2.7 on mac os. All installed from the standard installers provided on the sites.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":6270,"Q_Id":9374885,"Users Score":8,"Answer":"random.choice is as far as I can tell part of python itself, not of numpy. Did you import random?\nUpdate: numpy 1.7 added a new function, numpy.random.choice. Obviously, you need numpy 1.7 for it.\nUpdate2: it seems that in unreleased numpy 2.0, this was temporarily called numpy.random.sample. It has been renamed back. 
Which is why when using unreleased versions, you really should have a look at the API (pydoc numpy.random) and changelogs.","Q_Score":8,"Tags":"python,numpy,scipy","A_Id":9375030,"CreationDate":"2012-02-21T09:17:00.000","Title":"Why has the numpy random.choice() function been discontinued?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does anybody know how much memory is used by a numpy ndarray? (with let's say 10,000,000 float elements).","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":14086,"Q_Id":9395758,"Users Score":0,"Answer":"I gauss, easily, we can compute by print(a.size \/\/ 1024 \/\/ 1024, a.dtype)\nit is similar to how much MB is uesd, however with the param dtype, float=8B, int8=1B ...","Q_Score":23,"Tags":"python,arrays,memory,numpy,floating-point","A_Id":62791902,"CreationDate":"2012-02-22T13:29:00.000","Title":"How much memory is used by a numpy ndarray?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using a server where I don't have administrative rights and I need to use the latest version of numpy. The system administrator insists that he cannot update the global numpy to the latest version, so I have to install it locally. \nI can do that without trouble, but how do I make sure that \"import numpy\" results in the newer local install to be imported, as opposed to the older global version? I can adjust my PYTHONPATH, but I will want to use some of the global imports as well so I can't exclude all the global packages.\nI'm on CentOS 6, by the way.\nThanks!","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1857,"Q_Id":9419327,"Users Score":1,"Answer":"Python searches the path in order, so simply put the directory where you installed your NumPy first in the path.\nYou can check numpy.version.version to make sure you're getting the version you want.","Q_Score":1,"Tags":"python,numpy,centos","A_Id":9419462,"CreationDate":"2012-02-23T18:53:00.000","Title":"have local numpy override global","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I'm designing a matrix for a computer vision project and believe I have one of my calculations wrong. Unfortunately, I'm not sure where it's wrong. \nI was considering creating a matrix that was 100,000,000 x 100,000,000 with each 'cell' containing a single integer (1 or 0). If my calculations are correct, it would take 9.53674316 \u00d7 10^9 MB. Is that correct?!?\nMy next question is, if it IS correct, are there ways to reduce memory requirements to a more realistic level while still keeping the matrix the same size? Of course, there is a real possibility I won't actually need a matrix that size but this is absolute worse case scenario (as put forth by a friend). 
The size seems ridiculous to me since we'd be covering such a small distance at a time.\nThanks1\nAnthony","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":793,"Q_Id":9455651,"Users Score":4,"Answer":"In theory, an element of {0, 1} should consume at most 1 bit per cell. That means 8 cells per byte or 1192092895 megabytes or about one petabyte, which is too much, unless you are google :) Not to mention, even processing (or saving) such matrix would take too much time (about a year I'd say).\nYou said that in many cases you won't even need matrix so large. So, you can create smaller matrix at start (10,000 x 10,000) and then double the size every time enlargment is needed, copying old contents.\nIf your matrix is sparse (has much much more 1's than 0's or vice-versa), then it is much more efficient to store just coordinates where ones are in some efficient data structure, depending what operations (search, data access) you need.\nSide note: In many languages, you have to take proper care for that to be true, for example in C, even if you specify variable as boolean, it still takes one byte, 8 times as much as needed.","Q_Score":2,"Tags":"python,matrix","A_Id":9455730,"CreationDate":"2012-02-26T18:05:00.000","Title":"RAM requirements for matrix processing","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have photo images of galaxies. There are some unwanted data on these images (like stars or aeroplane streaks) that are masked out. I don't just want to fill the masked areas with some mean value, but to interpolate them according to surrounding data. How do i do that in python? \nWe've tried various functions in SciPy.interpolate package: RectBivariateSpline, interp2d, splrep\/splev, map_coordinates, but all of them seem to work in finding new pixels between existing pixels, we were unable to make them fill arbitrary \"hole\" in data.","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":5262,"Q_Id":9478347,"Users Score":2,"Answer":"What you want is not interpolation at all. Interpolation depends on the assumption that data between known points is roughly contiguous. In any non-trivial image, this will not be the case.\nYou actually want something like the content-aware fill that is in Photoshop CS5. There is a free alternative available in The GIMP through the GIMP-resynthesize plugin. These filters are extremely advanced and to try to re-implement them is insane. 
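(If you have to stay inside Python, OpenCV's inpainting functions are a much simpler cousin of content-aware fill and can at least fill small masked regions from their surroundings; a sketch, with the file names as placeholders:)

    import cv2

    img  = cv2.imread("galaxy.png")                       # the image to repair
    mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # non-zero where data is bad
    filled = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA) # radius 3 px, Telea's method
    cv2.imwrite("galaxy_filled.png", filled)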
A better choice would be to figure out how to use GIMP-resynthesize in your program instead.","Q_Score":7,"Tags":"python,image-processing,interpolation,mask,astronomy","A_Id":9478656,"CreationDate":"2012-02-28T08:05:00.000","Title":"How do i fill \"holes\" in an image?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just installed Scipy and Numpy on my machine and added them to the System Library option in eclipse.\nNow the program runs fine, but eclipse editor keeps giving this red mark on the side says \"Unresolved import\".\nI guess I didn't configure correctly.\nAny one know how to fix this ?\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":611,"Q_Id":9500524,"Users Score":1,"Answer":"Try to recreate your project in PyDev and add these new libraries.","Q_Score":1,"Tags":"python,eclipse,scipy","A_Id":9500809,"CreationDate":"2012-02-29T14:02:00.000","Title":"Eclipse editor doesn't recognize Scipy content","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I heard that computation results can be very sensitive to choice of random number generator.\n1 I wonder whether it is relevant to program own Mersenne-Twister or other pseudo-random routines to get a good number generator. Also, I don't see why I should not trust native or library generators as random.uniform() in numpy, rand() in C++. I understand that I can build generators on my own for distributions other than uniform (inverse repartition function methor, polar method). But is it evil to use one built-in generator for uniform sampling?\n2 What is wrong with the default 'time' seed? Should one re-seed and how frequently in a code sample (and why)?\n3 Maybe you have some good links on these topics!\n--edit More precisely, I need random numbers for multistart optimization routines, and for uniform space sample to initialize some other optimization routine parameters. I also need random numbers for Monte Carlo methods (sensibility analysis). I hope the precisions help figure out the scope of question.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":3275,"Q_Id":9523570,"Users Score":1,"Answer":"At least in C++, rand is sometimes rather poor quality, so code should rarely use it for anything except things like rolling dice or shuffling cards in children's games. In C++ 11, however, a set of random number generator classes of good quality have been added, so you should generally use them by preference.\nSeeding based on time can work fine under some circumstances, but not if you want to make it difficult for somebody else to duplicate the same series of numbers (e.g., if you're generating nonces for encryption). Normally, you want to seed only once at the beginning of the program, at least in a single-threaded program. 
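As a rough illustration of the seed-once advice (my own sketch, not part of the original answer; the seed value 12345 is arbitrary):\nimport random\nrng = random.Random(12345)  # seed exactly once, at program start\nsamples = [rng.uniform(0.0, 1.0) for _ in range(5)]  # reuse the same generator everywhere instead of re-seeding\n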
With multithreading, you frequently want a separate seed for each thread, in which case you need each one to start out unique to prevent generating the same sequences in all threads.","Q_Score":2,"Tags":"c++,python,random,simulation,probability","A_Id":9523685,"CreationDate":"2012-03-01T20:31:00.000","Title":"Random number generation with C++ or Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a possibility to retrieve random rows from Cassandra (using it with Python\/Pycassa)?\nUpdate: With random rows I mean randomly selected rows!","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2264,"Q_Id":9566060,"Users Score":1,"Answer":"You might be able to do this by making a get_range request with a random start key (just a random string), and a row_count of 1. \nFrom memory, I think the finish key would need to be the same as start, so that the query 'wraps around' the keyspace; this would normally return all rows, but the row_count will limit that.\nHaven't tried it but this should ensure you get a single result without having to know exact row keys.","Q_Score":3,"Tags":"python,cassandra,uuid,pycassa","A_Id":9567221,"CreationDate":"2012-03-05T11:45:00.000","Title":"Cassandra\/Pycassa: Getting random rows","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently opening CSV files in Excel with multiple columns, where values will only appear if a number changes. For example, the data may ACTUALLY be: 90,90,90,90,91. But it will only appear as 90,,,,91. I'd really like the values in between to be filled with 90s. Is there anyway python could help with this? I really appreciate your help!","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2496,"Q_Id":9570157,"Users Score":0,"Answer":"Can you see the missing values when you open the CSV with wordpad? If so, then Python or any other scripting language should see them too.","Q_Score":0,"Tags":"python,excel,csv","A_Id":9570191,"CreationDate":"2012-03-05T16:23:00.000","Title":"Blank Values in Excel File From CSV (not just rows)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently opening CSV files in Excel with multiple columns, where values will only appear if a number changes. For example, the data may ACTUALLY be: 90,90,90,90,91. But it will only appear as 90,,,,91. I'd really like the values in between to be filled with 90s. Is there anyway python could help with this? I really appreciate your help!","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2496,"Q_Id":9570157,"Users Score":0,"Answer":"You can also do this entirely in excel:\nSelect column (or whatever range you're working with), then go to Edit>Go To (Ctrl+G) and click Special. \nCheck Blanks & click OK.\nThis will select only the empty cells within the list. \nNow type the = key, then up arrow and ctrl-enter.\nThis will put a formula in every blank cell to equal the cell above it. 
You could then copy ^ paste values only to get rid of the formulas.","Q_Score":0,"Tags":"python,excel,csv","A_Id":9570477,"CreationDate":"2012-03-05T16:23:00.000","Title":"Blank Values in Excel File From CSV (not just rows)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a document d1 consisting of lines of form user_id tag_id.\nThere is another document d2 consisting of tag_id tag_name\nI need to generate clusters of users with similar tagging behaviour.\nI want to try this with k-means algorithm in python.\nI am completely new to this and cant figure out how to start on this.\nCan anyone give any pointers?\nDo I need to first create different documents for each user using d1 with his tag vocabulary?\nAnd then apply k-means algorithm on these documents?\nThere are like 1 million users in d1. I am not sure I am thinking in right direction, creating 1 million files ?","AnswerCount":4,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2252,"Q_Id":9595494,"Users Score":4,"Answer":"Since the data you have is binary and sparse (in particular, not all users have tagged all documents, right)? So I'm not at all convinced that k-means is the proper way to do this.\nAnyway, if you want to give k-means a try, have a look at the variants such as k-medians (which won't allow \"half-tagging\") and convex\/spherical k-means (which supposedly works better with distance functions such as cosine distance, which seems a lot more appropriate here).","Q_Score":3,"Tags":"python,tags,cluster-analysis,data-mining,k-means","A_Id":9597117,"CreationDate":"2012-03-07T03:43:00.000","Title":"Clustering using k-means in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to solve linear equations system using Jython, so I can't use Num(Sci)Py for this purpose. What are the good alternatives?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2868,"Q_Id":9611746,"Users Score":1,"Answer":"As suggested by @talonmies' comment, the real answer to this is 'find an equivalent Java package.'","Q_Score":1,"Tags":"python,numpy,scipy,jython,linear-algebra","A_Id":9646517,"CreationDate":"2012-03-08T01:31:00.000","Title":"Solve linear system in Python without NumPy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to get my code (running in eclipse) to import pandas.\nI get the following error: \"ImportError: numpy.core.multiarray failed to import\"when I try to import pandas. I'm using python2.7, pandas 0.7.1, and numpy 1.5.1","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":5833,"Q_Id":9641916,"Users Score":0,"Answer":"@user248237:\nI second Keith's suggestion that its probably a 32\/64 bit compatibility issue. I ran into the same problem just this week while trying to install a different module. Check the versions of each of your modules and make everything matches. In general, I would stick to the 32 bit versions -- not all modules have official 64 bit support. 
I uninstalled my 64 bit version of python and replaced it with a 32 bit one, reinstalled the modules, and haven't had any problems since.","Q_Score":7,"Tags":"python,numpy,pandas","A_Id":11955915,"CreationDate":"2012-03-09T22:36:00.000","Title":"Python Pandas: can't find numpy.core.multiarray when importing pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to get my code (running in eclipse) to import pandas.\nI get the following error: \"ImportError: numpy.core.multiarray failed to import\"when I try to import pandas. I'm using python2.7, pandas 0.7.1, and numpy 1.5.1","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":5833,"Q_Id":9641916,"Users Score":1,"Answer":"Just to make sure:\n\nDid you install pandas from the sources ? Make sure it's using the version of NumPy you want.\nDid you upgrade NumPy after installing pandas? Make sure to recompile pandas, as there can be some changes in the ABI (but w\/ that version of NumPy, I doubt it's the case)\nAre you calling pandas and\/or Numpy from their source directory ? Bad idea, NumPy tends to choke on that.","Q_Score":7,"Tags":"python,numpy,pandas","A_Id":12003130,"CreationDate":"2012-03-09T22:36:00.000","Title":"Python Pandas: can't find numpy.core.multiarray when importing pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to get my code (running in eclipse) to import pandas.\nI get the following error: \"ImportError: numpy.core.multiarray failed to import\"when I try to import pandas. I'm using python2.7, pandas 0.7.1, and numpy 1.5.1","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":5833,"Q_Id":9641916,"Users Score":1,"Answer":"Try to update to numpy version 1.6.1. Helped for me!","Q_Score":7,"Tags":"python,numpy,pandas","A_Id":12007981,"CreationDate":"2012-03-09T22:36:00.000","Title":"Python Pandas: can't find numpy.core.multiarray when importing pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a numpy masked matrix. And wanted to do interpolation in the masked regions.\nI tried the RectBivariateSpline but it didn't recognize the masked regions as masked and used those points also to interpolate. I also tried the bisplrep after creating the X,Y,Z 1d vectors. They were each of length 45900. It took a lot of time to calculate the Bsplines. And finally gave a Segmentation fault while running bisplev .\nThe 2d matrix is of the size 270x170. \nIs there any way to make RectBivariateSpline not to include the masked regions in interpolation? Or is there any other method?\nbisplrep was too slow.\nThanking you,\nindiajoe\nUPDATE : \nWhen the grid is small the scipy.interpolate.Rbf with 'linear' function is doing reasonable job. But it gives error when the array is large.\nIs there any other function which will allow me to interpolate and smooth my matrix?\nI have also concluded the following. 
Do correct me if I am wrong.\n1) RectBivariateSpline requires perfect filled matrix and hence masked matrices cannot be used.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1646,"Q_Id":9644735,"Users Score":1,"Answer":"Very late, but...\nI have a problem similar to yours, and am getting the segmentation fault with bisplines, and also memory error with rbf (in which the \"thin_plate\" function works great for me.\nSince my data is unstructured but is created in a structured manner, I use downsampling to half or one third of the density of data points, so that I can use Rbf. What I advise you to do is (very inefficient, but still better than not doing at all) to subdivide the matrix in many overlapping regions, then create rbf interpolators for each region, then when you interpolate one point you choose the appropriate interpolator.\nAlso, if you have a masked array, you could still perform interpolation in the unmasked array, then apply the mask on the result. (well actually no, see the comments)\nHope this helps somebody","Q_Score":3,"Tags":"python,matrix,numpy,scipy,interpolation","A_Id":11876131,"CreationDate":"2012-03-10T07:29:00.000","Title":"Interpolation of large 2d masked array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have millions of images containing every day photos. I'm trying to find a way to pick out those in which some certain colours are present, say, red and orange, disregarding the shape or object. The size may matter - e.g., at least 50x50 px.\nIs there an efficient and lightweight library for achieving this? I know there is OpenCV and it seems quite powerful, but would it be too bloated for this task? It's a relatively simple task, right?\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2147,"Q_Id":9650670,"Users Score":1,"Answer":"I do not know if there is a library but you could segment these areas using a simple thresholding segmentation algorithm. Say, you want to find red spots. Extract the red channel from the image, select a threshold, and eliminate pixels that are below the threshold. The resulting pixels are your spots. To find a suitable threshold you can build the image's red channel histogram and find the valley there. The lowest point in the valley is the threshold that you could use. If there are more than one valley, smooth the histogram until there is one valley and two peaks. You can use a Gaussian function to smooth the histogram. To find the spots from the remaining pixels you can use the labeling algorithm and then find the connected components in the graph that the labeling algorithm produced. Yes, it is simple. :)","Q_Score":1,"Tags":"python,image-processing,computer-vision","A_Id":9651522,"CreationDate":"2012-03-10T22:18:00.000","Title":"Image spot detection in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"pymat doesnt seem to work with current versions of matlab, so I was wondering if there is another equivalent out there (I havent been able to find one). The gist of what would be desirable is running an m-file from python (2.6). 
(and alternatives such as scipy don't fit since I don't think they can run everything from the m-file). \nThanks in advance!","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":5746,"Q_Id":9675386,"Users Score":2,"Answer":"You can always start MATLAB as a separate subprocess and collect results via stdout\/files (see the subprocess package).","Q_Score":2,"Tags":"python,matlab","A_Id":9675452,"CreationDate":"2012-03-12T21:56:00.000","Title":"Running m-files from Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a csv file where one of the columns is a date\/time string. How do I parse it correctly with pandas? I don't want to make that column the index. Thanks!\nUri","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1378,"Q_Id":9723000,"Users Score":7,"Answer":"Pass dateutil.parser.parse (or another datetime conversion function) in the converters argument to read_csv","Q_Score":6,"Tags":"python,parsing,datetime,pandas","A_Id":9739828,"CreationDate":"2012-03-15T15:33:00.000","Title":"How do I tell pandas to parse a particular column as a datetime object, but not make it an index?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"For a class project I am writing a simple matrix multiplier in Python. My professor has asked for it to be threaded. The way I handle this right now is to create a thread for every row and throw the result in another matrix.\nWhat I wanted to know is whether it would be faster if, instead of creating a thread for each row, I created a smaller number of threads that each handle several rows. \nFor example: given Matrix1 100x100 * Matrix2 100x100 (matrix sizes can vary widely):\n\n4 threads each handling 25 rows\n10 threads each handling 10 rows\n\nMaybe this is just a matter of fine tuning, or maybe, even with the thread-creation overhead, a thread per row is still faster than the distribution mechanism above.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":251,"Q_Id":9731496,"Users Score":2,"Answer":"You will probably get the best performance if you use one thread for each CPU core available to the machine running your application. You won't get any performance benefit by running more threads than you have processors. \nIf you are planning to spawn new threads each time you perform a matrix multiplication then there is very little hope of your multi-threaded app ever outperforming the single-threaded version unless you are multiplying really huge matrices. The overhead involved in thread creation is just too high relative to the time required to multiply matrices. However, you could get a significant performance boost if you spawn all the worker threads once when your process starts and then reuse them over and over again to perform many matrix multiplications. \nFor each pair of matrices you want to multiply you will want to load the multiplicand and multiplier matrices into memory once and then allow all of your worker threads to access the memory simultaneously. 
This should be safe because those matrices will not be changing during the multiplication.\nYou should also be able to allow all the worker threads to write their output simultaneously into the same output matrix because (due to the nature of matrix multiplication) each thread will end up writing its output to different elements of the matrix and there will not be any contention.\nI think you should distribute the rows between threads by maintaining an integer NextRowToProcess that is shared by all of the threads. Whenever a thread is ready to process another row it calls InterlockedIncrement (or whatever atomic increment operation you have available on your platform) to safely get the next row to process.","Q_Score":2,"Tags":"python,multithreading,matrix,distributed","A_Id":9731658,"CreationDate":"2012-03-16T03:36:00.000","Title":"Creating a thread for each operation or a some threads for various operations?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am training a svm classifier with cross validation (stratifiedKfold) using the scikits interfaces. For each test set (of k), I get a classification result. I want to have a confusion matrix with all the results.\nScikits has a confusion matrix interface:\n sklearn.metrics.confusion_matrix(y_true, y_pred)\nMy question is how should I accumulate the y_true and y_pred values. They are arrays (numpy). Should I define the size of the arrays based on my k-fold parameter? And for each result I should add the y_true and y-pred to the array ????","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4075,"Q_Id":9734403,"Users Score":2,"Answer":"You can either use an aggregate confusion matrix or compute one for each CV partition and compute the mean and the standard deviation (or standard error) for each component in the matrix as a measure of the variability.\nFor the classification report, the code would need to be modified to accept 2 dimensional inputs so as to pass the predictions for each CV partitions and then compute the mean scores and std deviation for each class.","Q_Score":6,"Tags":"python,machine-learning,scikits,scikit-learn","A_Id":9760852,"CreationDate":"2012-03-16T09:05:00.000","Title":"scikits confusion matrix with cross validation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a user-app that takes input from the user as the current open wikipedia page. I have written a piece of code that takes this as input to my module and generates a list of keywords related to that particular article using webscraping and natural language processing.\nI want to expand the functionality of the app by providing in addition to the keywords that i have identified, a set of related topics that may be of interest to the user. Is there any API that wikipedia provides that will do the trick. If there isn't, Can anybody Point me to what i should be looking into (incase i have to write code from scratch). Also i will appreciate any pointers in identifying any algorithm that will train the machine to identify topic maps. 
I am not seeking any paper but rather a practical implementation of something basic\nso to summarize, \n\nI need a way to find topics related to current article in wikipedia (categories will also do)\nI will also appreciate a sample algorithm for training a machine to identify topics that usually are related and clustered.\n\nps. please be specific because i have researched through a number of obvious possibilities\n appreciate it thank you","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1060,"Q_Id":9760636,"Users Score":0,"Answer":"You can scrape the categories if you want. If you're working with python, you can read the wikitext directly from their API, and use mwlib to parse the article and find the links.\nA more interesting but harder to implement approach would be to create clusters of related terms, and given the list of terms extracted from an article, find the closest terms to them.","Q_Score":1,"Tags":"python,keyword,wikipedia,topic-maps","A_Id":9760985,"CreationDate":"2012-03-18T17:46:00.000","Title":"How to get related topics from a present wikipedia article?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use a 3rd party library which returns after a lot of computation a ctypes object containing pointers.\nHow can I save the ctypes object and what the pointers are pointing to for later use?\nI tried \n\nscipy.io.savemat => TypeError: Could not convert object to array\ncPickle => ctypes objects containing pointers cannot be pickled","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":14249,"Q_Id":9768218,"Users Score":11,"Answer":"Python has no way of doing that automatically for you:\nYou will have to build code to pick all the desired Data yourself, putting them in a suitable Python data structure (or just adding the data in a unique bytes-string where you will know where each element is by its offset) - and then save that object to disk.\nThis is not a \"Python\" problem - it is exactly a problem Python solves for you when you use Python objects and data. When coding in C or lower level, you are responsible to know not only where your data is, but also, the length of each chunk of data (and allocate memory for each chunk, and free it when done, and etc). 
And this is what you have to do in this case.\nYour data structure should give you not only the pointers, but also the length of the data in each pointed location (in a way or the other - if the pointer is to another structure, \"size_of\" will work for you)","Q_Score":8,"Tags":"python,ctypes,pickle","A_Id":9771616,"CreationDate":"2012-03-19T10:04:00.000","Title":"How to save ctypes objects containing pointers","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use a 3rd party library which returns after a lot of computation a ctypes object containing pointers.\nHow can I save the ctypes object and what the pointers are pointing to for later use?\nI tried \n\nscipy.io.savemat => TypeError: Could not convert object to array\ncPickle => ctypes objects containing pointers cannot be pickled","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":14249,"Q_Id":9768218,"Users Score":1,"Answer":"To pickle a ctypes object that has pointers, you would have to define your own __getstate__\/__reduce__ methods for pickling and __setstate__ for unpickling. More information in the docs for pickle module.","Q_Score":8,"Tags":"python,ctypes,pickle","A_Id":41899145,"CreationDate":"2012-03-19T10:04:00.000","Title":"How to save ctypes objects containing pointers","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use a 3rd party library which returns after a lot of computation a ctypes object containing pointers.\nHow can I save the ctypes object and what the pointers are pointing to for later use?\nI tried \n\nscipy.io.savemat => TypeError: Could not convert object to array\ncPickle => ctypes objects containing pointers cannot be pickled","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":14249,"Q_Id":9768218,"Users Score":0,"Answer":"You could copy the data into a Python data structure and dereference the pointers as you go (using the contents attribute of a pointer).","Q_Score":8,"Tags":"python,ctypes,pickle","A_Id":9768597,"CreationDate":"2012-03-19T10:04:00.000","Title":"How to save ctypes objects containing pointers","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm implementing a Matrix Product State class, which is some kind of special tensor decomposition scheme in python\/numpy for fast algorithm prototyping.\nI don't think that there already is such a thing out there, and I want to do it myself to get a proper understanding of the scheme.\nWhat I want to have is that, if I store a given tensor T in this format as T_mps, I can access the reconstructed elements by T_mps[ [i0, i1, ..., iL] ]. 
This is achieved by the getitem(self, key) method and works fine.\nNow I want to use numpy.allclose(T, mps_T) to see if my decomposition is correct.\nBut when I do this I get a type error for my own type:\n\nTypeError: function not supported for these types, and can't coerce safely to supported types\n\nI looked at the documentation of allclose and there it is said, that the function works for \"array like\" objects. Now, what is this \"array like\" concept and where can I find its specification ?\nMaybe I'm better off, implementing my own allclose method ? But that would somewhat be reinventing the wheel, wouldn't it ?\nAppreciate any help\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":416,"Q_Id":9801235,"Users Score":3,"Answer":"The term \"arraylike\" is used in the numpy documentation to mean \"anything that can be passed to numpy.asarray() such that it returns an appropriate numpy.ndarray.\" Most sequences with proper __len__() and __getitem__() methods work okay. Note that the __getitem__(i) must be able to accept a single integer index in range(len(self)), not just a list of indices as you seem to indicate. The result from this __getitem__(i) must either be an atomic value that numpy knows about, like a float or an int, or be another sequence as above. Without more details about your Matrix Product State implementation, that's about all I can help.","Q_Score":1,"Tags":"python,numpy,magic-methods,type-coercion","A_Id":9802282,"CreationDate":"2012-03-21T08:48:00.000","Title":"python\/numpy: Using own data structure with np.allclose() ? Where to look for the requirements \/ what are they?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm on OSX Snow Leopard and I run 2.7 in my scripts and the interpreter seems to be running 2.6\nBefore I was able to import numpy but then I would get an error when trying to import matplotlib so I went looking for a solution and updated my PYTHONPATH variable, but I think I did it incorrectly and have now simply screwed everything up.\nThis is what I get when I try and import numpy in my script:\n\nTraceback (most recent call last):\n File \".\/hh_main.py\", line 5, in \n import numpy\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site->packages\/numpy\/init.py\", line 137, in \n import add_newdocs\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site->packages\/numpy\/add_newdocs.py\", line 9, in \n from numpy.lib import add_newdoc\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site->packages\/numpy\/lib\/init.py\", line 4, in \n from type_check import *\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site->packages\/numpy\/lib\/type_check.py\", line 8, in \n import numpy.core.numeric as _nx\n File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site->packages\/numpy\/core\/init.py\", line 5, in \n import multiarray\n ImportError: dlopen(\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site->packages\/numpy\/core\/multiarray.so, 2): Symbol not found: _PyCapsule_Import\n Referenced from: \/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site->packages\/numpy\/core\/multiarray.so\n Expected in: flat namespace\n in 
\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site->packages\/numpy\/core\/multiarray.so\n\nFurthermore this is what I get from sys.path in the interpreter: \n\n['', '\/Users\/joshuaschneier\/Documents\/python_files', '\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages', '\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python27.zip', '\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7', '\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/plat-darwin', '\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/plat-mac', '\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/plat-mac\/lib-scriptpackages', '\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/lib-tk', '\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/lib-old', '\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/lib-dynload']\n\nAnd this is my PYTHONPATH which I guess I updated wrong:\n\n:\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/\n\nThanks for any help.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2566,"Q_Id":9817995,"Users Score":1,"Answer":"You'll generally need to install numpy, matplotlib etc once for every version of python you use, as it will install itself to the specific 'python2.x\/site-packages' directory. \nIs the above output generated from a 2.6 or 2.7 session? If it's a 2.6 session, then yes, pointing your PYTHONPATH at 2.7 won't work - numpy includes compiled C code (e.g. the multiarray.so file) which will have been built against a specific version of python. \nIf you don't fancy maintaining two sets of packages, I'd recommend installing numpy, matplotlib etc all for version 2.7, removing that PYTHONPATH setting, and making sure that both scripts and interpreter sessions use version 2.7. \nIf you want to keep both versions you'll just have to install each packages twice (and you'll probably still wnat to undo your PTYHONPATH change)","Q_Score":1,"Tags":"python,path,numpy,matplotlib","A_Id":9820025,"CreationDate":"2012-03-22T07:22:00.000","Title":"Python import works in interpreter, doesn't work in script Numpy\/Matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a vector\/array of n elements. I want to choose m elements.\nThe choices must be fair \/ deterministic -- equally many from each subsection.\nWith m=10, n=20 it is easy: just take every second element.\nBut how to do it in the general case? 
Do I have to calculate the LCD?","AnswerCount":5,"Available Count":1,"Score":0.1194272985,"is_accepted":false,"ViewCount":11243,"Q_Id":9873626,"Users Score":3,"Answer":"Use a loop (int i=0; i < m; i++)\nThen to get the indexes you want, Ceil(i*m\/n).","Q_Score":12,"Tags":"python,algorithm","A_Id":9873885,"CreationDate":"2012-03-26T14:05:00.000","Title":"Choose m evenly spaced elements from a sequence of length n","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to serialize a large (~10**6 rows, each with ~20 values) list, to be used later by myself (so pickle's lack of safety isn't a concern).\nEach row of the list is a tuple of values, derived from some SQL database. So far, I have seen datetime.datetime, strings, integers, and NoneType, but I might eventually have to support additional data types.\nFor serialization, I've considered pickle (cPickle), json, and plain text - but only pickle saves the type information: json can't serialize datetime.datetime, and plain text has its obvious disadvantages.\nHowever, cPickle is pretty slow for data this large, and I'm looking for a faster alternative.","AnswerCount":8,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":35941,"Q_Id":9897345,"Users Score":14,"Answer":"Pickle is actually quite fast so long as you aren't using the (default) ASCII protocol. Just make sure to dump using protocol=pickle.HIGHEST_PROTOCOL.","Q_Score":33,"Tags":"python,serialization","A_Id":12095050,"CreationDate":"2012-03-27T20:38:00.000","Title":"Pickle alternatives","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using pandas to read a csv file. The data are numbers but stored in the csv file as text. Some of the values are non-numeric when they are bad or missing. How do I filter out these values and convert the remaining data to integers. \nI assume there is a better\/faster way than looping over all the values and using isdigit() to test for them being numeric. \nDoes pandas or numpy have a way of just recognizing bad values in the reader? If not, what is the easiest way to do it? Do I have to specific the dtypes to make this work?","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":3639,"Q_Id":9927711,"Users Score":3,"Answer":"You can pass a custom list of values to be treated as missing using pandas.read_csv . Alternately you can pass functions to the converters argument.","Q_Score":1,"Tags":"python,numpy,pandas","A_Id":9927957,"CreationDate":"2012-03-29T14:42:00.000","Title":"Reading csv in python pandas and handling bad values","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed Numpy using ActivePython and when I try to import numpy module, it is throwing the following error:\n\nImportError:\n \/opt\/ActivePython-2.7\/lib\/python2.7\/site-packages\/numpy\/core\/multiarray.so:\n undefined symbol: PyUnicodeUCS2_FromUnicode\n\nI am fairly new to python, and I am not sure what to do. I appreciate if you could point me to the right direction. 
\n\nShould I remove python and configure its compilation with the\n\"--enable-unicode=ucs2\" or \"--with-wide-unicode\" option?\n\nCheers\n\n\nOS: Fedora 16, 64bit; \nPython version: Python 2.7.2 (default, Mar 26 2012, 10:29:24);\nThe current compile Unicode version: ucs4","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1398,"Q_Id":9929170,"Users Score":0,"Answer":"I suggest that a quick solution to these sort of complications is that you use the Enthought Python Distribpotion (EPD) on Linux which includes a wide range of extensions. Cheers.","Q_Score":0,"Tags":"linux,unicode,numpy,python-2.7,ucs","A_Id":9993744,"CreationDate":"2012-03-29T16:05:00.000","Title":"Numpy needs the ucs2","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to implement a broker using zeromq PUB\/SUB(python eventlets). zeromq 2.1 does not seem to implement filtering at publisher and all messages are broadcasted to all subscribers which inturn apply filter. Is there some kind of workaround to achieve filtering at publisher. If not how bad is the performance if there are ~25 publishers and 25 subscribers exchanging msgs @ max rate of 200 msgs per second where msg_size ~= 5K through the broker. \nAre there any opensource well-tested zero-mq broker implementations.??","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":4347,"Q_Id":9939238,"Users Score":2,"Answer":"From the \u00d8MQ guide:\n\nFrom ZeroMQ v3.x, filtering happens at the publisher side when using a connected protocol (tcp:\/\/ or ipc:\/\/). Using the epgm:\/\/ protocol, filtering happens at the subscriber side. In ZeroMQ v2.x, all filtering happened at the subscriber side.","Q_Score":2,"Tags":"python,zeromq","A_Id":28609198,"CreationDate":"2012-03-30T08:13:00.000","Title":"ZeroMQ PUB\/SUB filtering and performance","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a binary image with contour lines and need to purify each contour line of all unnecessary pixels, leaving behind a minimally connected line.\nCan somebody give me a source, code example or further information for this kind of problem and where to search for help, please?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":6610,"Q_Id":9980270,"Users Score":1,"Answer":"A combination of erosion and dilation (and vice versa) on a binary image can help to get rid of salt n pepper like noise leaving small lines intact. 
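A minimal sketch of that idea (my addition, not from the original answer; assumes OpenCV and NumPy are installed, and the file name is a placeholder):\nimport cv2\nimport numpy as np\nimg = cv2.imread('contours.png', 0)  # 0 loads the image as grayscale\nkernel = np.ones((3, 3), np.uint8)\ncleaned = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)  # erosion then dilation removes small speckles while keeping lines\n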
Keywords are 'rank order filters' and 'morphological filters'.","Q_Score":0,"Tags":"python,image-processing,binary","A_Id":9982670,"CreationDate":"2012-04-02T16:37:00.000","Title":"Thinning contour lines in a binary image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm making a small matrix operations library as a programming challenge to myself(and for the purpose of learning to code with Python), and I've come upon the task of calculating the determinant of 2x2, 3x3 and 4x4 matrices.\nAs far as my understanding of linear algebra goes, I need to implement the Rule of Sarrus in order to do the first 2, but I don't know how to tackle this Pythonically or for matrices of larger size. Any hints, tips or guides would be much appreciated.","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":3244,"Q_Id":10003232,"Users Score":6,"Answer":"The rule of Sarrus is only a mnemonic for solving 3x3 determinants, and won't be as helpful moving beyond that size.\nYou should investigate the Leibniz formula for calculating the determinant of an arbitrarily large square matrix. The nice thing about this formula is that the determinant of an n*n matrix is that it can be determined in terms of a combination of the determinants of some of its (n-1)*(n-1) sub-matrices, which lends itself nicely to a recursive function solution.\nIf you can understand the algorithm behind the Leibniz formula, and you have worked with recursive functions before, it will be straightforward to translate this in to code (Python, or otherwise), and then you can find the determinant of 4x4 matrices and beyond!","Q_Score":3,"Tags":"python,matrix,linear-algebra","A_Id":10003296,"CreationDate":"2012-04-04T00:08:00.000","Title":"Python determinant calculation(without the use of external libraries)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have hundreds of files containing text I want to use with NLTK. 
Here is one such file:\n\n\u09ac\u09c7,\u09ac\u099a\u09be \u0987\u09af\u09bc\u09be\u09a3\u09cd\u09a0\u09be,\u09b0\u09cd\u099a\u09be \u09a2\u09be\u09b0\u09cd\u09ac\u09bf\u09a4 \u09a4\u09cb\u0996\u09be\u099f\u09b9 \u09a8\u09a4\u09c1\u09a8, \u0985 \u09aa\u09cd\u09b0\u09ac\u0983\u09be\u09b6\u09bf\u09a4\u0964\n\u09a4\u09ac\u09c7 ' \u098f \u09ac\u0982 \u09ae\u09c1\u09b6\u09be\u09af\u09bc\u09c7\u09b0\u09be ' \u09aa\u09a4\u09cd\u09b0\u09bf\u09ac\u09cd\u09af\u09be\u09af\u09bc \u09aa\u09cd\u09b0\u0995\u09be\u09b6\u09bf\u09a4 \u09a4\u09bf\u09a8\u099f\u09bf \u09b2\u09c7\u0996\u09be\u0987 \u09ac\u0987\u09af\u09c7\n\u09b8\u0982\u09ac\u09cd\u09af\u099c\u09be\u09a8 \u09ac\u09cd\u09af\u09b0\u09be\u09b0 \u099c\u09a8\u09be \u09ac\u09bf\u09b6\u09c7\u09b7\u09ad\u09be\u09ac\u09c7 \u09aa\u09b0\u09bf\u09ac\u09b0\u09cd\u09a7\u09bf\u09a4\u0964 \u09aa\u09be\u099a \u09a6\u09be\u09aa\u09a8\u09bf\u0995\u09c7\u09ac\n\u09a1:\u09ac\u09a8 \u09a8\u09bf\u09af\u09bc\u09c7 \u098f\u0987 \u09ac\u0987 \u09a4\u09c8\u09b0\u09bf \u09ac\u09be\u09ac\u09be\u09b0 \u09aa\u09b0\u09bf\u09ac\u09cd\u09af\u09b2\u09cd\u09aa\u09a8\u09be\u0993 \u09ae\u09cd\u09ad\u09cd\u09b0\u09be\u09b8\u09c1\u09a8\u09a4\u09a8\n\u09b8\u09be\u09ae\u09a8\u09cd\u09a4\u09c7\u09b0\u0987\u0964 \u09a4\u09be\u09b0 \u0986\u09b0 \u09a4\u09be\u09b0 \u09b8\u09b9\u0995\u09be\u09b0\u09c0\u09a6\u09c7\u09ac \u09a8\u09bf\u09b7\u09cd\u09a0\u09be \u099b\u09be\u09a1\u09be \u0985\u09b2\u09cd\u09aa \u09b8\u09ae\u09af\u09bc\u09c7\n\u098f\u0987 \u09ac\u0987 \u09aa\u09cd\u09b0\u09ac\u09cd\u09af\u09be\u09b6\u09bf\u09a4 \u09b9\u09a4\u09c7 \u09aa\u09be\u09b0\u09a4 \u09a8\u09be\u0964,\u09a4\u09be\u0981\u09a6\u09c7\u09b0 \u09b8\u0995\u09b2\u0995\u09c7 \u0986\u09ae\u09be\u09a7\n\u09a8\u09ae\u09b8\u09cd\u0995\u09be\u09b0 \u099c\u09be\u09a8\u09be\u0987\u0964\n\u09ac\u09a4\u09be\u09ac\u09cd\u09af\u09be\u09a4\u09be \u09b6\u09cd\u09b0\u09be\u09ac\u09a8\u09cd\u09a4\u09be \u099c\u09cd\u099c\u09be\u09a3\u09cd\u09a3\u09bf\u0995\n\u099c\u09be\u09a8\u09c1\u09af\u09bc\u09be\u09b0\u09bf \u09e8 \u09a3\u09cd\u099f \u09a3\u09cd\u099f \u09ee \nTotal characters: 378\n\nNote that each line does not contain a new sentence. Rather, the sentence terminator - the equivalent of the period in English - is the '\u0964' symbol.\nCould someone please help me create my corpus? If imported into a variable MyData, I would need to access MyData.words() and MyData.sents(). Also, the last line should not appear in the corpus (it merely contains a character count).\nPlease note that I will need to run operations on data from all the files at once.\nThanks in advance!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":272,"Q_Id":10006467,"Users Score":1,"Answer":"You don't need to input the files yourself or to provide words and sents methods. \nRead in your corpus with PlaintextCorpusReader, and it will provide those for you.\nThe corpus reader constructor accepts arguments for the path and filename pattern of the files, and for the input encoding (be sure to specify it). \nThe constructor also has optional arguments for the sentence and word tokenization functions, so you can pass it your own method to break up the text into sentences. If word and sentence detection is really simple, i.e., if the | character has other uses, you can configure a tokenization function from the nltk's RegexpTokenizer family, or you can write your own from scratch. 
(Before you write your own, study the docs and code or write a stub to find out what kind of input it's called with.)\nIf recognizing sentence boundaries is non-trivial, you can later figure out how to train the nltk's PunktSentenceTokenizer, which uses an unsupervized statistical algorithm to learn which uses of the sentence terminator actually end a sentence.\nIf the configuration of your corpus reader is fairly complex, you may find it useful to create a class that specializes PlaintextCorpusReader. But much of the time that's not necessary. Take a look at the NLTK code to see how the gutenberg corpus is implemented: It's just a PlainTextCorpusReader instance with appropriate arguments for the constructor.","Q_Score":0,"Tags":"python,nlp,nltk","A_Id":10054525,"CreationDate":"2012-04-04T07:11:00.000","Title":"Creating a corpus from data in a custom format","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have the Enthought Python Distribution installed.\nBefore that I installed Python2.7 and installed other modules (e.g. opencv). \nEnthought establishes itself as the default python. \nCalled 7.2, but it is 2.7. \nNow if i want to import cv in the Enthought Python it always gives me the Segmentation fault Error.\nIs there anyway to import cv in the Enthought Python ?\nThat would be awesome.\nAlso installing any new module into Enthought, seems to have the same error. \nAny solution for that would be great. \nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4794,"Q_Id":10042429,"Users Score":2,"Answer":"Python only SEGFAULTs if \n\nThere is error in a native extension DLL code loaded\nVirtual machine has bugs (it has not)\n\nRun Python in -vvv mode to see more information about import issues.\nYou probably need to recompile the modules you need against the Python build you are using. Python major versions and architecture (32-bit vs. 64-bit) native extensions are not compatible between versions.\nAlso you can use gdb to extract a C stack trace needed to give the exact data where and why it crashes. \nThere are only tips what you should do; because the problem is specific to your configuration only and not repeatable people can only give you information how to further troubleshoot the issue. Because it is very likely that methods to troubleshoot issue given here might be too advanced, I just recommend reinstall everything.","Q_Score":2,"Tags":"python,segmentation-fault,enthought","A_Id":10043383,"CreationDate":"2012-04-06T10:46:00.000","Title":"Segmentation fault Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Whenever I execute a plt.show() in an Ipython console in spyderlib, the console freezes until I close the figure window. This only occurs in spyderlib and the blocking does occur when I run ipython --pylab or run ipython normally and call plt.ion() before plotting. 
I've tried using plt.draw(), but nothing happens with that command.\nplt.ion() works for ipython, but when I run the same command in spyder it seems to not plot anything altogether (plt.show() no longer works).\nEnviroment Details:\nPython 2.6.5, Qt 4.6.2, PyQt4 (API v2) 4.7.2 on Linux","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1329,"Q_Id":10069680,"Users Score":0,"Answer":"I was having a similar (I think) problem. Make sure your interpreter is set to execute in the current interpreter (default, should allow for interactive plotting). If it's set to execute in a new dedicated python interpreter make sure that interact with the python interpreter after execution is selected. This solved the problem for me.","Q_Score":3,"Tags":"python,matlab,matplotlib,ipython,spyder","A_Id":10724685,"CreationDate":"2012-04-09T06:20:00.000","Title":"Spyder plotting blocks console commands","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can we search content(text) within images using plone 4.1. I work on linux Suppose an image say a sample.jpg contains text like 'Happy Birthday', on using search 'Birthday' I should get the contents i.e sample.jpg","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":232,"Q_Id":10071609,"Users Score":0,"Answer":"Best is to use collective.DocumentViewer with various options to select from","Q_Score":0,"Tags":"python,plone","A_Id":20901570,"CreationDate":"2012-04-09T09:57:00.000","Title":"Can we search content(text) within images using plone 4.1?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The output of spearmanr (Spearman correlation) of X,Y gives me the following:\nCorrelation: 0.54542821980327882\nP-Value: 2.3569040685361066e-65\nwhere len(X)=len(Y)=800.\nMy questions are as follows:\n0) What is the confidence (alpha?) here ? \n1) If correlation coefficient > alpha, the hypothesis of the correlation being a coincidence is rejected, thus there is correlation. Is this true ?\nThanks in advance..","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1006,"Q_Id":10076222,"Users Score":4,"Answer":"It's up to you to choose the level of significance (alpha). To be coherent you shall choose it before running the test. The function will return you the lowest alpha you can choose for which you reject the null hypothesis (H0) [reject H0 when p-value < alpha or equivalently -p-value>-alpha].\nYou therefore know that the lowest value for which you reject the null hypothesis (H0) is p-value (2.3569040685361066e-65). 
Therefore being p-value incredibly small your null hypothesis is rejected for any relevant level of alpha (usually alpha = 0.05).","Q_Score":2,"Tags":"python,statistics,scipy,correlation","A_Id":10076295,"CreationDate":"2012-04-09T16:20:00.000","Title":"scipy: significance of the return values of spearmanr (correlation)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using linalg.eig(A) to get the eigenvalues and eigenvectors of a matrix. Is there an easy way to sort these eigenvalues (and associated vectors) in order?","AnswerCount":2,"Available Count":1,"Score":-0.2913126125,"is_accepted":false,"ViewCount":13966,"Q_Id":10083772,"Users Score":-3,"Answer":"np.linalg.eig will often return complex values. You may want to consider using np.sort_complex(eig_vals).","Q_Score":5,"Tags":"python,numpy","A_Id":39361043,"CreationDate":"2012-04-10T05:54:00.000","Title":"python numpy sort eigenvalues","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to initialize an array that has X two-dimensional elements. For example, if X = 3, I want it to be [[0,0], [0,0], [0,0]]. I know that [0]*3 gives [0, 0, 0], but how do I do this for two-dimensional elements?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1407,"Q_Id":10099619,"Users Score":0,"Answer":"Using the construct \n[[0,0]]*3 \nworks just fine and returns the following:\n[[0, 0], [0, 0], [0, 0]]","Q_Score":2,"Tags":"python,arrays","A_Id":10101163,"CreationDate":"2012-04-11T04:01:00.000","Title":"How do I initialize a one-dimensional array of two-dimensional elements in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to initialize an array that has X two-dimensional elements. For example, if X = 3, I want it to be [[0,0], [0,0], [0,0]]. I know that [0]*3 gives [0, 0, 0], but how do I do this for two-dimensional elements?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1407,"Q_Id":10099619,"Users Score":0,"Answer":"I believe that it's [[0,0],]*3","Q_Score":2,"Tags":"python,arrays","A_Id":10099628,"CreationDate":"2012-04-11T04:01:00.000","Title":"How do I initialize a one-dimensional array of two-dimensional elements in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a R data.frame containing longitude, latitude which spans over the entire USA map. When X number of entries are all within a small geographic region of say a few degrees longitude & a few degrees latitude, I want to be able to detect this and then have my program then return the coordinates for the geographic bounding box. Is there a Python or R CRAN package that already does this? 
If not, how would I go about ascertaining this information?","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":5580,"Q_Id":10108368,"Users Score":1,"Answer":"A few ideas:\n\nAd-hoc & approximate: The \"2-D histogram\". Create arbitrary \"rectangular\" bins, of the degree width of your choice, assign each bin an ID. Placing a point in a bin means \"associate the point with the ID of the bin\". Upon each add to a bin, ask the bin how many points it has. Downside: doesn't correctly \"see\" a cluster of points that straddle a bin boundary; and: bins of \"constant longitudinal width\" actually are (spatially) smaller as you move north.\nUse the \"Shapely\" library for Python. Follow its stock example for \"buffering points\", and do a cascaded union of the buffers. Look for globs over a certain area, or that \"contain\" a certain number of original points. Note that Shapely is not intrinsically \"geo-savvy\", so you'll have to add corrections if you need them.\nUse a true DB with spatial processing. MySQL, Oracle, Postgres (with PostGIS), MSSQL all (I think) have \"Geometry\" and \"Geography\" datatypes, and you can do spatial queries on them (from your Python scripts). \n\nEach of these has different costs in dollars and time (in the learning curve)... and different degrees of geospatial accuracy. You have to pick what suits your budget and\/or requirements.","Q_Score":20,"Tags":"python,r,geolocation,cran","A_Id":10108983,"CreationDate":"2012-04-11T14:47:00.000","Title":"Detecting geographic clusters","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Latent Dirichlet Allocation with a corpus of news data from six different sources. I am interested in topic evolution, emergence, and want to compare how the sources are alike and different from each other over time. I know that there are a number of modified LDA algorithms such as the Author-Topic model, Topics Over Time, and so on.\nMy issue is that very few of these alternate model specifications are implemented in any standard format. A few are available in Java, but most exist as conference papers only. What is the best way to go about implementing some of these algorithms on my own? I am fairly proficient in R and jags, and can stumble around in Python when given long enough. I am willing to write the code, but I don't really know where to start and I don't know C or Java. Can I build a model in JAGS or Python just having the formulas from the manuscript? If so, can someone point me at an example of doing this? Thanks.","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":2734,"Q_Id":10112500,"Users Score":4,"Answer":"My friend's response is below, pardon the language please.\n\nFirst I wrote up a Python implementation of the collapsed Gibbs sampler seen here (http:\/\/www.pnas.org\/content\/101\/suppl.1\/5228.full.pdf+html) and fleshed out here (http:\/\/cxwangyi.files.wordpress.com\/2012\/01\/llt.pdf). This was slow as balls.\nThen I used a Python wrapping of a C implementation of this paper (http:\/\/books.nips.cc\/papers\/files\/nips19\/NIPS2006_0511.pdf). Which is fast as f*ck, but the results are not as great as one would see with NMF.
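The "2-D histogram" idea from the geographic-clustering answer above ("Detecting geographic clusters") can be sketched in a few lines of Python; the bin width, threshold and data layout are made up for illustration:

```python
from collections import defaultdict

def dense_bins(points, width=1.0, threshold=10):
    """Bin (lon, lat) points into width-degree cells and return the bounding
    box of every cell holding at least `threshold` points."""
    bins = defaultdict(list)
    for lon, lat in points:
        bins[(int(lon // width), int(lat // width))].append((lon, lat))
    boxes = []
    for cell, pts in bins.items():
        if len(pts) >= threshold:
            lons, lats = zip(*pts)
            boxes.append((min(lons), min(lats), max(lons), max(lats)))
    # Caveat noted in the answer: clusters that straddle a cell boundary get split.
    return boxes
```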
\nBut NMF implementations I've seen, with scitkits, and even with the scipy sparse-compatible recently released NIMFA library, they all blow the f*ck up on any sizable corpus. My new white whale is a sliced, distributed implementation of the thing. This'll be non-trivial.","Q_Score":7,"Tags":"python,r,nlp,text-mining,lda","A_Id":10177394,"CreationDate":"2012-04-11T19:20:00.000","Title":"Implementing alternative forms of LDA","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Okay, so I probably shouldn't be worrying about this anyway, but I've got some code that is meant to pass a (possibly very long, possibly very short) list of possibilities through a set of filters and maps and other things, and I want to know if my implementation will perform well.\nAs an example of the type of thing I want to do, consider this chain of operations:\n\nget all numbers from 1 to 100\nkeep only the even ones\nsquare each number\ngenerate all pairs [i, j] with i in the list above and j in [1, 2, 3, 4,5]\nkeep only the pairs where i + j > 40\n\nNow, after doing all this nonsense, I want to look through this set of pairs [i, j] for a pair which satisfies a certain condition. Usually, the solution is one of the first entries, in which case I don't even look at any of the others. Sometimes, however, I have to consume the entire list, and I don't find the answer and have to throw an error.\nI want to implement my \"chain of operations\" as a sequence of generators, i.e., each operation iterates through the items generated by the previous generator and \"yields\" its own output item by item (a la SICP streams). That way, if I never look at the last 300 entries of the output, they don't even get processed. I known that itertools provides things like imap and ifilter for doing many of the types of operations I would want to perform. \nMy question is: will a series of nested generators be a major performance hit in the cases where I do have to iterate through all possibilities?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1253,"Q_Id":10143637,"Users Score":1,"Answer":"\"Nested\" iterators amount to the composition of the functions that the iterators implement, so in general they pose no particularly novel performance considerations. \nNote that because generators are lazy, they also tend to cut down on memory allocation as compared with repeatedly allocating one sequence to transform into another.","Q_Score":1,"Tags":"python,generator","A_Id":10144447,"CreationDate":"2012-04-13T15:19:00.000","Title":"How fast are nested python generators?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm just starting out with Hadoop and writing some Map Reduce jobs. 
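To make the nested-generator discussion above ("How fast are nested python generators?") concrete, here is a minimal sketch of the exact pipeline described in that question, built from chained generator expressions so nothing past the first match is ever computed; the final condition is invented:

```python
numbers = (n for n in range(1, 101))              # all numbers from 1 to 100
evens = (n for n in numbers if n % 2 == 0)        # keep only the even ones
squares = (n * n for n in evens)                  # square each number
pairs = ((i, j) for i in squares for j in range(1, 6))
candidates = (p for p in pairs if p[0] + p[1] > 40)

# Stop at the first pair satisfying some condition; StopIteration means no match.
first = next(p for p in candidates if p[0] % 3 == 0)
print(first)   # -> (36, 5)
```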
I was looking for help on writing an MR job in Python that allows me to take some emails and put them into HDFS so I can search on the text or attachments of the email.\nThank you!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":251,"Q_Id":10144325,"Users Score":1,"Answer":"Yes, you need to use Hadoop Streaming if you want to write Python code for running MapReduce jobs","Q_Score":2,"Tags":"python,map,hadoop,mapreduce,reduce","A_Id":10144991,"CreationDate":"2012-04-13T16:03:00.000","Title":"Emails and Map Reduce Job","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written code in python to implement the DBSCAN clustering algorithm.\nMy dataset consists of 14k users, with each user represented by 10 features.\nI am unable to decide what exactly to keep as the value of Min_samples and epsilon as input.\nHow should I decide that?\nThe similarity measure is euclidean distance. (Hence it becomes even more tough to decide.) Any pointers?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2333,"Q_Id":10155542,"Users Score":0,"Answer":"It is often hard to estimate DBSCAN's parameters.\nDid you think about the OPTICS algorithm? In this case you only need Min_samples, which would correspond to the minimal cluster size.\nOtherwise, for DBSCAN I've done it in the past by trial and error: try some values and see what happens. A general rule to follow is that if your dataset is noisy, you should have a larger value, and it is also correlated with the number of dimensions (10 in this case).","Q_Score":0,"Tags":"python,cluster-analysis,dbscan","A_Id":10155633,"CreationDate":"2012-04-14T17:04:00.000","Title":"Deciding input values to DBSCAN algorithm","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had 0.71 pandas before today. I tried to update and I simply ran the .exe file supplied by the website.\nnow I tried \" import pandas\" but then it gives me an error\nImportError: C extensions not built: if you installed already verify that you are not importing from the source directory.\nI am new to python and pandas in general. Anything will help. \nthanks,","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2351,"Q_Id":10158613,"Users Score":0,"Answer":"I had the same error. I did not build pandas myself, so I thought I should not get this error, as mentioned on the pandas site. So I was confused about how to resolve this error.\nThe pandas site says that matplotlib is an optional dependency, so I didn't install it initially. But interestingly, after installing matplotlib the error disappeared. I am not sure what effect it had. \nIt found something!","Q_Score":4,"Tags":"python,pandas","A_Id":12068757,"CreationDate":"2012-04-15T00:37:00.000","Title":"Importing Confusion Pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had 0.71 pandas before today.
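In the spirit of the trial-and-error suggestion in the DBSCAN answer above ("Deciding input values to DBSCAN algorithm"), a minimal scikit-learn sketch; the feature matrix and candidate values are made up, and the point is simply to watch how the clustering changes as eps varies:

```python
import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.rand(14000, 10)            # stand-in for the 14k users x 10 features

for eps in (0.2, 0.4, 0.8):              # candidate values to compare
    labels = DBSCAN(eps=eps, min_samples=10, metric='euclidean').fit_predict(X)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    n_noise = int(np.sum(labels == -1))  # label -1 marks noise points
    print(f"eps={eps}: {n_clusters} clusters, {n_noise} noise points")
```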
I tried to update and I simply ran the .exe file supplied by the website.\nnow I tried \" import pandas\" but then it gives me an error\nImportError: C extensions not built: if you installed already verify that you are not importing from the source directory.\nI am new to python and pandas in general. Anything will help. \nthanks,","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":2351,"Q_Id":10158613,"Users Score":1,"Answer":"Had the same issue. Resolved by checking dependencies - make sure you have numpy > 1.6.1 and python-dateutil > 1.5 installed.","Q_Score":4,"Tags":"python,pandas","A_Id":11630790,"CreationDate":"2012-04-15T00:37:00.000","Title":"Importing Confusion Pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to know if somebody knows a way to customize the csv output in htsql, and especially the delimiter and the encoding ?\nI would like to avoid iterating over each result and find a way through configuration and\/or extensions.\nThank in advance.\nAnthony","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":170,"Q_Id":10205990,"Users Score":3,"Answer":"If you want TAB as a delimiter, use tsv format (e.g. \/query\/:tsv instead of \/query\/:csv).\nThere is no way to specify the encoding other than UTF-8. You can reencode the output manually on the client.","Q_Score":1,"Tags":"python,sql,htsql","A_Id":10210348,"CreationDate":"2012-04-18T08:52:00.000","Title":"Customizing csv output in htsql","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it efficient to calculate many results in parallel with multiprocessing.Pool.map() in a situation where each input value is large (say 500\u00a0MB), but where input values general contain the same large object? I am afraid that the way multiprocessing works is by sending a pickled version of each input value to each worker process in the pool. If no optimization is performed, this would mean sending a lot of data for each input value in map(). Is this the case? I quickly had a look at the multiprocessing code but did not find anything obvious.\nMore generally, what simple parallelization strategy would you recommend so as to do a map() on say 10,000 values, each of them being a tuple (vector, very_large_matrix), where the vectors are always different, but where there are say only 5 different very large matrices?\nPS: the big input matrices actually appear \"progressively\": 2,000 vectors are first sent along with the first matrix, then 2,000 vectors are sent with the second matrix, etc.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2008,"Q_Id":10242525,"Users Score":1,"Answer":"I hit a similar issue: parallelizing calculations on a big dataset. As you mentioned multiprocessing.Pool.map pickles the arguments. What I did was to implement my own fork() wrapper that only pickles the return values back to the parent process, hence avoiding pickling the arguments. 
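The multiprocessing answer above avoids pickling the big arguments with a custom fork() wrapper; a related, simpler trick (not the poster's code, and it relies on the POSIX fork start method) is to keep the large object in a module-level global so worker processes inherit it at fork time and only the small vectors travel through map():

```python
import numpy as np
from multiprocessing import Pool

BIG_MATRIX = np.ones((1000, 1000))        # made-up stand-in for the very large matrix

def work(vector):
    # The child process inherited BIG_MATRIX when the pool forked; only
    # `vector` and the returned float are pickled.
    return float(BIG_MATRIX.dot(vector).sum())

if __name__ == "__main__":
    vectors = [np.random.rand(1000) for _ in range(20)]
    with Pool(4) as pool:
        results = pool.map(work, vectors)
    print(len(results))
```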
And a parallel map() on top of the wrapper.","Q_Score":2,"Tags":"python,dictionary,parallel-processing,multiprocessing,large-data","A_Id":10248052,"CreationDate":"2012-04-20T08:10:00.000","Title":"When to and when not to use map() with multiprocessing.Pool, in Python? case of big input values","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm getting a memory error trying to do KernelPCA on a data set of 30,000 texts. RandomizedPCA works alright. I think what's happening is that RandomizedPCA works with sparse arrays and KernelPCA doesn't. \nDoes anyone have a list of learning methods that are currently implemented with sparse array support in scikits-learn?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":691,"Q_Id":10304280,"Users Score":1,"Answer":"We don't have that yet. You have to read the docstrings of the individual classes for now.\nAnyway, non-linear models do not tend to work better than linear models for high-dimensional sparse data such as text documents (and they can overfit more easily).","Q_Score":2,"Tags":"python,machine-learning,scikits,scikit-learn","A_Id":10308680,"CreationDate":"2012-04-24T18:56:00.000","Title":"python, scikits-learn: which learning methods support sparse feature vectors?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"do you know a quick\/elegant Python\/Scipy\/Numpy solution for the following problem:\nYou have a set of x, y coordinates with associated values w (all 1D arrays). Now bin x and y onto a 2D grid (size BINSxBINS) and calculate quantiles (like the median) of the w values for each bin, which should at the end result in a BINSxBINS 2D array with the required quantiles.\nThis is easy to do with some nested loops, but I am sure there is a more elegant solution.\nThanks,\nMark","AnswerCount":4,"Available Count":1,"Score":0.1488850336,"is_accepted":false,"ViewCount":5849,"Q_Id":10305964,"Users Score":3,"Answer":"I'm just trying to do this myself, and it sounds like you want the command \"scipy.stats.binned_statistic_2d\", with which you can find the mean, median, standard deviation or any user-defined function of the third parameter, given the bins.\nI realise this question has already been answered but I believe this is a good built-in solution.","Q_Score":7,"Tags":"python,numpy,statistics,scipy","A_Id":27016762,"CreationDate":"2012-04-24T21:04:00.000","Title":"Quantile\/Median\/2D binning in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have implemented a Tensor Factorization Algorithm in Matlab. But, actually, I need to use it in a Web Application. \nSo I implemented a web site on the Django framework; now I need to merge it with my Tensor Factorization algorithm. \nFor those who are not familiar with tensor factorization, you can think of it as a bunch of multiplications, additions and divisions on large matrices of size, for example, 10 000 x 8 000. In the tensor factorization case we do not have matrices; instead we have 3-dimensional (for my purpose) arrays.\nBy the way, I'm using MySQL as my database.
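A minimal sketch of the scipy.stats.binned_statistic_2d suggestion from the binning answer above ("Quantile/Median/2D binning in Python"); the data and bin count are made up, and a callable can be passed as the statistic for quantiles other than the median:

```python
import numpy as np
from scipy.stats import binned_statistic_2d

BINS = 32
x, y, w = np.random.rand(3, 10000)                     # made-up x, y, w arrays

median, xedges, yedges, binnumber = binned_statistic_2d(
    x, y, w, statistic='median', bins=BINS)

q75, _, _, _ = binned_statistic_2d(
    x, y, w, statistic=lambda v: np.percentile(v, 75), bins=BINS)

print(median.shape, q75.shape)                          # both (BINS, BINS)
```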
\nI am considering implementing this algorithm in Python or in C++. But I can't be sure which one is better. \nDo you have any idea about the efficiency of Python and C++ when processing a huge data set? Which one is better? Why?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":291,"Q_Id":10307173,"Users Score":0,"Answer":"Python is just fine. I am a Python person. I do not know C++ personally. However, during my research of Python, the creator of Mathematica himself stated that Python is equally as powerful as Mathematica. Python is used in many highly accurate calculations (e.g. engineering software, architecture work, etc.).","Q_Score":1,"Tags":"python,c++,mysql,django,large-data","A_Id":10327841,"CreationDate":"2012-04-24T22:58:00.000","Title":"Most efficient language to implement tensor factorization for Web Application","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As there are a multitude of ways to write binary modules for Python, I was hoping those of you with experience could advise on the best approach if I wish to improve the performance of some segments of the code as much as possible.\nAs I understand it, one can either write an extension using the python\/numpy C-api, or wrap some already written pure C\/C++\/Fortran function to be called from the Python code.\nNaturally, tools like Cython are the easiest way to go, but I assume that writing the code by hand gives better control and provides better performance.\nThe question, and it may be too general, is which approach to use. Write a C or C++ extension? Wrap external C\/C++ functions or use callbacks to Python functions?\nI write this question after reading chapter 10 in Langtangen's \"Python scripting for computational science\", where there is a comparison of several methods to interface between Python and C.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1000,"Q_Id":10351450,"Users Score":3,"Answer":"I would say it depends on your skills\/experience and your project.\nIf this is a one-off and you are proficient in C\/C++ and have already written a Python wrapper, then write your own extension and interface it.\nIf you are going to work with Numpy on other projects, then go for the Numpy C-API; it's extensive and rather well documented, but it is also quite a lot of documentation to process.\nAt least I had a lot of difficulty processing it, but then again I suck at C.\nIf you're not really sure, go Cython: it is far less time consuming and the performance is in most cases very good.
(my choice)\nFrom my point of view you need to be a good C coder to do better than Cython with the 2 previous implementation, and it will be much more complexe and time consuming.\nSo are you a great C coder ?\nAlso it might be worth your while to look into pycuda or some other GPGPU stuff if you're looking for performance, depending on your hardware of course.","Q_Score":0,"Tags":"python,c,numpy","A_Id":10352335,"CreationDate":"2012-04-27T13:23:00.000","Title":"best way to extend python \/ numpy performancewise","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have used import numpy as np in my program and when I try to execute np.zeroes to create a numpy array then it does not recognize the module zeroes in the program.\nThis happens when I execute in the subdirectory where the python program is.\nIf I copy it root folder and execute, then it shows the results.\nCan someone guide me as to why is this happening and what can I do to get the program executed in the subdirectory it self?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":73,"Q_Id":10395691,"Users Score":1,"Answer":"then it does not recognize the module zeroes in the program\n\nMake sure you don't have a file called numpy.py in your subdirectory. If you do, it would shadow the \"real\" numpy module and cause the symptoms you describe.","Q_Score":1,"Tags":"python,numpy","A_Id":10395730,"CreationDate":"2012-05-01T09:12:00.000","Title":"Numpy cannot be accessed in sub directories","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got a 640x480 binary image (0s and 255s). There is a single white blob in the image (nearly circular) and I want to find the centroid of the blob (it's always convex). Essentially, what we're dealing with is a 2D boolean matrix. I'd like the runtime to be linear or better if possible - is this possible? \nTwo lines of thought so far:\n\nMake use of the numpy.where() function \nSum the values in each column and row, then find where the max value is based on those numbers... but is there a quick and efficient way to do this? This might just be a case of me being relatively new to python.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":1130,"Q_Id":10409674,"Users Score":1,"Answer":"Depending on the size of your blob, I would say that dramatically reducing the resolution of your image may achieve what you want. \nReduce it to a 1\/10 resolution, find the one white pixel, and then you have a precise idea of where to search for the centroid.","Q_Score":4,"Tags":"python,image-processing,numpy,boolean,python-imaging-library","A_Id":10409722,"CreationDate":"2012-05-02T07:46:00.000","Title":"Finding a specific index in a binary image in linear time?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got a 640x480 binary image (0s and 255s). There is a single white blob in the image (nearly circular) and I want to find the centroid of the blob (it's always convex). 
Essentially, what we're dealing with is a 2D boolean matrix. I'd like the runtime to be linear or better if possible - is this possible? \nTwo lines of thought so far:\n\nMake use of the numpy.where() function \nSum the values in each column and row, then find where the max value is based on those numbers... but is there a quick and efficient way to do this? This might just be a case of me being relatively new to python.","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1130,"Q_Id":10409674,"Users Score":2,"Answer":"The centroid's coordinates are arithmetic means of coordinates of the points.\nIf you want the linear solution, just go pixel by pixel, and count means of each coordinates, where the pixels are white, and that's the centroid.\nThere is probably no way you can make it better than linear in general case, however, if your circular object is much smaller than the image, you can speed it up, by searching for it first (sampling a number of random pixels, or a grid of pixels, if you know the blob is big enough) and then using BFS or DFS to find all the white points.","Q_Score":4,"Tags":"python,image-processing,numpy,boolean,python-imaging-library","A_Id":10409877,"CreationDate":"2012-05-02T07:46:00.000","Title":"Finding a specific index in a binary image in linear time?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a research project involving a microscope (with a camera connected to the view port; the video feed is streamed to an application we're developing) and a manipulator arm. The microscope and manipulator arm are both controlled by a Luigs & Neumann control box (very obsolete - the computer interfaces with it with a serial cable and its response time is slowwww.) The microscope can be moved in 3 dimensions; X, Y, and Z, whose axes are at right angles to one another. When the box is queried, it will return decimal values for the position of each axis of each device. Each device can be sent a command to move to a specific position, with sub-micrometer precision.\nThe manipulator arm, however, is adjustable in all 3 dimensions, and thus there is no guarantee that any of its axes are aligned at right angles. We need to be able to look at the video stream from the camera, and then click on a point on the screen where we want the tip of the manipulator arm to move to. Thus, the two coordinate systems have to be calibrated.\nRight now, we have achieved calibration by moving the microscope\/camera's position to the tip of the manipulator arm, setting that as the synchronization point between the two coordinate systems, and moving the manipulator arm +250um in the X direction, moving the microscope to the tip of the manipulator arm at this new position, and then using the differences between these values to define a 3d vector that corresponds to the distance and direction moved by the manipulator, per unit in the microscope coordinate system. This is repeated for each axis of the manipulator arm. \nOnce this data is obtained, in order to move the manipulator arm to a specific location in the microscope coordinate system, a system of equations can be solved by the program which determines how much it needs to move the manipulator in each axis to move it to the center point of the screen. 
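A vectorised sketch of the "mean of the white coordinates" idea from the centroid answer above, using numpy.where as the question itself suggested; the blob here is a made-up square:

```python
import numpy as np

img = np.zeros((480, 640), dtype=np.uint8)
img[200:260, 300:360] = 255              # stand-in for the white blob

ys, xs = np.where(img == 255)            # one linear pass over the boolean mask
centroid = (ys.mean(), xs.mean())        # arithmetic mean of the coordinates
print(centroid)                          # -> (229.5, 329.5)
```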
This works pretty reliably so far.\nThe issue we're running into here is that due to the slow response time of the equipment, it can take 5-10 minutes to complete the calibration process, which is complicated by the fact that the tip of the manipulator arm must be changed occasionally during an experiment, requiring the calibration process to be repeated. Our research is rather time sensitive and this creates a major bottleneck in the process.\nMy linear algebra is a little patchy, but it seems like if we measure the units traveled by the tip of the manipulator arm per unit in the microscope coordinate system and have this just hard coded into the program (for now), it might be possible to move all 3 axes of the manipulator a specific amount at once, and then to derive the vectors for each axis from this information. I'm not really sure how to go about doing this (or if it's even possible to do this), and any advice would be greatly appreciated. If there's any additional information you need, or if you need clarification on anything please let me know.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":303,"Q_Id":10420966,"Users Score":0,"Answer":"You really need four data points to characterize three independent axes of movement.\nCan you can add some other constraints, ie are the manipulator axes orthogonal to each other, even if not fixed relative to the stage's axes? Do you know the manipulator's alignment roughly, even if not exactly?\nWhat takes the most time - moving the stage to re-center? Can you move the manipulator and stage at the same time? How wide is the microscope's field of view? How much distance-distortion is there near the edges of the view - does it actually have to be re-centered each time to be accurate? Maybe we could come up with a reverse-screen-distortion mapping instead?","Q_Score":0,"Tags":"python,linear-algebra,robotics,calibration","A_Id":10421975,"CreationDate":"2012-05-02T20:14:00.000","Title":"Manipulator\/camera calibration issue (linear algebra oriented)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a problem when I run a script with python. I haven't done any parallelization in python and don't call any mpi for running the script. I just execute \"python myscript.py\" and it should only use 1 cpu.\nHowever, when I look at the results of the command \"top\", I see that python is using almost 390% of my cpus. I have a quad core, so 8 threads. I don't think that this is helping my script to run faster. So, I would like to understand why python is using more than one cpu, and stop it from doing so.\nInteresting thing is when I run a second script, that one also takes up 390%. If I run a 3rd script, the cpu usage for each of them drops to 250%. I had a similar problem with matlab a while ago, and the way I solved it was to launch matlab with -singlecompthread, but I don't know what to do with python.\nIf it helps, I'm solving the Poisson equation (which is not parallelized at all) in my script.\n\nUPDATE:\nMy friend ran the code on his own computer and it only takes 100% cpu. I don't use any BLAS, MKL or any other thing. I still don't know what the cause for 400% cpu usage is.\nThere's a piece of fortran algorithm from the library SLATEC, which solves the Ax=b system. 
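To make the "four data points characterize three axes" remark in the calibration answer above concrete, here is a least-squares sketch; all coordinates are invented, and this only illustrates the general affine-fit idea, not the authors' calibration routine:

```python
import numpy as np

# Four (manipulator, microscope) position pairs recorded during calibration.
man = np.array([[0, 0, 0], [250, 0, 0], [0, 250, 0], [0, 0, 250]], dtype=float)
mic = np.array([[12, 7, 3], [262, 9, 2], [14, 255, 5], [13, 8, 252]], dtype=float)

# Fit microscope = manipulator @ A + b (approximately) as one least-squares problem.
X = np.hstack([man, np.ones((len(man), 1))])
coef, *_ = np.linalg.lstsq(X, mic, rcond=None)
A, b = coef[:3], coef[3]

def manipulator_target(microscope_point):
    """Manipulator coordinates that should place the tip at a microscope point."""
    return np.linalg.solve(A.T, np.asarray(microscope_point, dtype=float) - b)
```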
That part I think is using a lot of cpu.","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":1390,"Q_Id":10427900,"Users Score":2,"Answer":"Your code might be calling some functions that uses C\/C++\/etc. underneath. In that case, it is possible for multiple thread usage. \nAre you calling any libraries that are only python bindings to some more efficiently implemented functions?","Q_Score":1,"Tags":"python,multithreading,parallel-processing,cpu-usage","A_Id":10428163,"CreationDate":"2012-05-03T08:42:00.000","Title":"Stop Python from using more than one cpu","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a problem when I run a script with python. I haven't done any parallelization in python and don't call any mpi for running the script. I just execute \"python myscript.py\" and it should only use 1 cpu.\nHowever, when I look at the results of the command \"top\", I see that python is using almost 390% of my cpus. I have a quad core, so 8 threads. I don't think that this is helping my script to run faster. So, I would like to understand why python is using more than one cpu, and stop it from doing so.\nInteresting thing is when I run a second script, that one also takes up 390%. If I run a 3rd script, the cpu usage for each of them drops to 250%. I had a similar problem with matlab a while ago, and the way I solved it was to launch matlab with -singlecompthread, but I don't know what to do with python.\nIf it helps, I'm solving the Poisson equation (which is not parallelized at all) in my script.\n\nUPDATE:\nMy friend ran the code on his own computer and it only takes 100% cpu. I don't use any BLAS, MKL or any other thing. I still don't know what the cause for 400% cpu usage is.\nThere's a piece of fortran algorithm from the library SLATEC, which solves the Ax=b system. That part I think is using a lot of cpu.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":1390,"Q_Id":10427900,"Users Score":1,"Answer":"You can always set your process affinity so it run on only one cpu. Use \"taskset\" command on linux, or process explorer on windows. \nThis way, you should be able to know if your script has same performance using one cpu or more.","Q_Score":1,"Tags":"python,multithreading,parallel-processing,cpu-usage","A_Id":10429302,"CreationDate":"2012-05-03T08:42:00.000","Title":"Stop Python from using more than one cpu","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a problem when I run a script with python. I haven't done any parallelization in python and don't call any mpi for running the script. I just execute \"python myscript.py\" and it should only use 1 cpu.\nHowever, when I look at the results of the command \"top\", I see that python is using almost 390% of my cpus. I have a quad core, so 8 threads. I don't think that this is helping my script to run faster. So, I would like to understand why python is using more than one cpu, and stop it from doing so.\nInteresting thing is when I run a second script, that one also takes up 390%. If I run a 3rd script, the cpu usage for each of them drops to 250%. 
I had a similar problem with matlab a while ago, and the way I solved it was to launch matlab with -singlecompthread, but I don't know what to do with python.\nIf it helps, I'm solving the Poisson equation (which is not parallelized at all) in my script.\n\nUPDATE:\nMy friend ran the code on his own computer and it only takes 100% cpu. I don't use any BLAS, MKL or any other thing. I still don't know what the cause for 400% cpu usage is.\nThere's a piece of fortran algorithm from the library SLATEC, which solves the Ax=b system. That part I think is using a lot of cpu.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":1390,"Q_Id":10427900,"Users Score":1,"Answer":"Could it be that your code uses SciPy or other numeric library for Python that is linked against Intel MKL or another vendor provided library that uses OpenMP? If the underlying C\/C++ code is parallelised using OpenMP, you can limit it to a single thread by setting the environment variable OMP_NUM_THREADS to 1:\nOMP_NUM_THREADS=1 python myscript.py\nIntel MKL for sure is parallel in many places (LAPACK, BLAS and FFT functions) if linked with the corresponding parallel driver (the default link behaviour) and by default starts as many compute threads as is the number of available CPU cores.","Q_Score":1,"Tags":"python,multithreading,parallel-processing,cpu-usage","A_Id":10445816,"CreationDate":"2012-05-03T08:42:00.000","Title":"Stop Python from using more than one cpu","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"My program plots a large number of lines (~200k) with matplotlib which is pretty greedy for memory. I usually have about 1.5G of free memory before plotting. When I show the figures, the system starts swapping heavily when there's still about 600-800M of free RAM. This behavior is not observed when, say, creating a huge numpy array, it just takes all the available memory instantaneously. It would be nice to figure out whether this is a matplotlib or system problem.\nI'm using 64-bit Arch Linux. \nUPD: The swapiness level is set to 10. Tried setting it to 0, as DoctororDrive suggested, but same thing. However, other programs seem to be ok with filling almost all the memory before the swap is used.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":199,"Q_Id":10481008,"Users Score":1,"Answer":"One thing to take into account for the huge numpy array is that you are not touching it. Memory is allocated lazily by default by the kernel. Try writing some values in that huge array and then check for swapping behaviour.","Q_Score":2,"Tags":"python,linux,matplotlib,archlinux","A_Id":10481152,"CreationDate":"2012-05-07T11:08:00.000","Title":"system swaps before the memory is full","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What are good libraries for creating a python program for (visually appealing) 3D physics simulations\/visualizations?\nI've looked at Vpython but the simulations I have seen look ugly, I want them to be visually appealing. It also looks like an old library. 
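The OMP_NUM_THREADS suggestion in the CPU-usage answer above can also be applied from inside the script, as long as the variables are set before the numeric libraries are first imported; a minimal sketch (the MKL variable only matters if the BLAS in use is Intel MKL):

```python
import os
os.environ["OMP_NUM_THREADS"] = "1"   # must happen before numpy/scipy load the BLAS
os.environ["MKL_NUM_THREADS"] = "1"

import numpy as np                     # BLAS/LAPACK calls should now stay single-threaded
```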
For 3D programming I've seen suggestions of using Panda3D and python-ogre but I'm not sure if it is really suited for exact simulations. Also, I would prefer a library that combines well with other libraries (E.g. pygame does not combine so well with other libraries).","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5810,"Q_Id":10489377,"Users Score":4,"Answer":"3D support for python is fairly weak compared to other languages, but with the way that most of them are built, the appearance of the program is far more mutable than you might think. For instance, you talked about Vpython, while many of their examples are not visually appealing, most of them are also from previous releases, the most recent release includes both extrusions, materials, and skins, which allow you to customize your appearance much moreso than before.\nIt is probably worth noting also, that it is simply not possible to make render-quality images in real time (cycles is a huge step in that direction, but it's still not quite there). I believe that most of your issue here is you are looking for something that technology is simply not capable of now, however if you are willing to take on the burden for making your simulation look visually appealing, Vpython (which is a gussied up version of PyOpenGL) is probably your best bet. Below is a run down of different technologies though, in case you are looking for anything more general:\nBlender: The most powerful python graphics program available, however it is made for graphic design and special effects, though it has very complex physics running underneath it, Blender is not made for physics simulations. Self contained.\nPanda3D: A program very often compared to Blender, however mostly useful for games. The game engine is nicer to work with than Blender's, but the render quality is far lower, as is the feature-richness. Self contained\nOgre: A library that was very popular for game development back in the day, with a lot of powerful functionality, especially for creating game environments. Event handling is also very well implemented. Can be made to integrate with other libraries, but with difficulty.\nVPython: A library intended for physics simulations that removes a lot of the texture mapping and rendering power compared to the other methods, however this capability is still there, as VPython is largely built from OpenGL, which is one of the most versatile graphics libraries around. As such, VPython also is very easy to integrate with other libraries.\nPyOpenGL: OpenGL for Python. OpenGL is one of the most widely use graphics libraries, and is without a doubt capable of producing some of the nicest visuals on this list (Except for Blender, which is a class of its own), however it will not be easy to do so. PyOpenGL is very bare bones, and while the functionality is there, it will be harder to implement than anything else. 
Plays very well with other libraries, but only if you know what you're doing.","Q_Score":6,"Tags":"python,3d,visualization,physics,simulation","A_Id":10859872,"CreationDate":"2012-05-07T21:26:00.000","Title":"What are good libraries for creating a python program for (visually appealing) 3D physics simulations\/visualizations?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing a project on document classification using a naive bayes classifier in python. I have used the nltk python module for the same. The docs are from the Reuters dataset. I performed preprocessing steps such as stemming and stopword elimination and proceeded to compute tf-idf of the index terms. I used these values to train the classifier but the accuracy is very poor (53%). What should I do to improve the accuracy?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2839,"Q_Id":10515907,"Users Score":0,"Answer":"There could be many reasons for the classifier not working, and there are many ways to tweak it.\n\nDid you train it with enough positive and negative examples?\nHow did you train the classifier? Did you give it every word as a feature, or did you also add more features for it to train on (like length of the text, for example)?\nWhat exactly are you trying to classify? Does the specified classification have specific words that are related to it?\n\nSo the question is rather broad. Maybe if you give more details you could get more relevant suggestions.","Q_Score":2,"Tags":"python,nltk,document-classification","A_Id":10516123,"CreationDate":"2012-05-09T12:17:00.000","Title":"document classification using naive bayes in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing a project on document classification using a naive bayes classifier in python. I have used the nltk python module for the same. The docs are from the Reuters dataset. I performed preprocessing steps such as stemming and stopword elimination and proceeded to compute tf-idf of the index terms. I used these values to train the classifier but the accuracy is very poor (53%). What should I do to improve the accuracy?","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2839,"Q_Id":10515907,"Users Score":1,"Answer":"A few points that might help:\n\nDon't use a stoplist, it lowers accuracy (but do remove punctuation)\nLook at word features, and take only the top 1000 for example. Reducing dimensionality will improve your accuracy a lot;\nUse bigrams as well as unigrams - this will up the accuracy a bit.\n\nYou may also find alternative weighting techniques such as log(1 + TF) * log(IDF) will improve accuracy. Good luck!","Q_Score":2,"Tags":"python,nltk,document-classification","A_Id":21133966,"CreationDate":"2012-05-09T12:17:00.000","Title":"document classification using naive bayes in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to make a line graph with multiple sets of data on the same graph.
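A sketch of the feature-weighting advice in the accepted naive Bayes answer above (top-N terms, unigrams plus bigrams, and log(1 + TF) * log(IDF)); the helper name and arguments are assumptions, and nltk's punkt tokenizer data must be installed for word_tokenize:

```python
import math
from collections import Counter
from nltk import word_tokenize, bigrams

def weighted_features(text, vocabulary, doc_freq, n_docs):
    """vocabulary: the top-1000 terms kept; doc_freq: term -> document count."""
    tokens = [t.lower() for t in word_tokenize(text) if t.isalpha()]
    terms = tokens + [' '.join(b) for b in bigrams(tokens)]   # unigrams + bigrams
    counts = Counter(t for t in terms if t in vocabulary)
    # log(1 + TF) * log(IDF) weighting, as suggested in the answer
    return {t: math.log(1 + tf) * math.log(n_docs / doc_freq[t])
            for t, tf in counts.items()}
```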
But they all scale differently so will need individual y axis scales. \nWhat code will put each variable on a separate axis?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":150,"Q_Id":10534950,"Users Score":0,"Answer":"The generic answer is to write a method that allows you to scale the y value for each data set to lay on the graph the way you want it. Then all your data points will have a y value on the same scale, and you can label the Y axis based upon how you define your translation for each data set.","Q_Score":0,"Tags":"python,graph","A_Id":10535067,"CreationDate":"2012-05-10T13:24:00.000","Title":"How to have multiple y axis on a line graph in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to create a pandas DataFrame and it works fine for a single file. If I need to build it for multiple files which have the same data structure. So instead of single file name I have a list of file names from which I would like to create the DataFrame.\nNot sure what's the way to append to current DataFrame in pandas or is there a way for pandas to suck a list of files into a DataFrame.","AnswerCount":6,"Available Count":2,"Score":0.0333209931,"is_accepted":false,"ViewCount":24945,"Q_Id":10545957,"Users Score":1,"Answer":"I might try to concatenate the files before feeding them to pandas. If you're in Linux or Mac you could use cat, otherwise a very simple Python function could do the job for you.","Q_Score":18,"Tags":"python,pandas","A_Id":10546350,"CreationDate":"2012-05-11T05:36:00.000","Title":"creating pandas data frame from multiple files","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to create a pandas DataFrame and it works fine for a single file. If I need to build it for multiple files which have the same data structure. So instead of single file name I have a list of file names from which I would like to create the DataFrame.\nNot sure what's the way to append to current DataFrame in pandas or is there a way for pandas to suck a list of files into a DataFrame.","AnswerCount":6,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":24945,"Q_Id":10545957,"Users Score":3,"Answer":"Potentially horribly inefficient but...\nWhy not use read_csv, to build two (or more) dataframes, then use join to put them together?\nThat said, it would be easier to answer your question if you provide some data or some of the code you've used thus far.","Q_Score":18,"Tags":"python,pandas","A_Id":10563786,"CreationDate":"2012-05-11T05:36:00.000","Title":"creating pandas data frame from multiple files","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to make a sorting system with card ranks and their values are obtained from a separate dictionary. In a simple deck of 52 cards, we have 2 to Ace ranks, in this case I want a ranking system where 0 is 10, J is 11, Q is 12, K is 13, A is 14 and 2 is 15 where 2 is the largest valued rank. 
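Besides the cat and join suggestions in the multi-file pandas answers above, a common pattern is to read each file and stack the frames with pandas.concat; the file glob here is hypothetical:

```python
import glob
import pandas as pd

files = sorted(glob.glob("data/part_*.csv"))       # files sharing one data structure
df = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
print(df.shape)
```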
The thing is, if there is a list where I want to sort rank cards in ASCENDING order according to the numbering system, how do I do so?\nFor example, here is a list, [3,5,9,7,J,K,2,0], I want to sort the list into [3,5,7,9,0,J,K,2]. I also made a dictionary for the numbering system as {'A': 14, 'K': 13, 'J': 11, 'Q': 12, '0': 10, '2': 15}.\nTHANKS","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":198,"Q_Id":10569853,"Users Score":0,"Answer":"have you already tried\n\nsorted(list_for_sorting, key=dictionary_you_wrote.__getitem__)\n\n?","Q_Score":3,"Tags":"python,list,sorting,dictionary","A_Id":10569942,"CreationDate":"2012-05-13T06:47:00.000","Title":"Sorting a list with elements containing dictionary values","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to use Products.csvreplicata 1.1.7 with Products.PressRoom 3.18 to export PressContacts to csv in Plone 4.1? Or is there any other product to import\/export all the PressRoom contacts into csv.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":153,"Q_Id":10577866,"Users Score":1,"Answer":"Go to Site setup \/ CSV Replicata tool, and select PressRoom content(s) as exportable (and then select the schemata you want to be considered during import\/export).","Q_Score":1,"Tags":"python,plone","A_Id":10596347,"CreationDate":"2012-05-14T05:26:00.000","Title":"How to use Products.csvreplicata 1.1.7 with Products.PressRoom to export PressContacts in Plone 4.1","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am starting to learn python, I tried to generate random values by passing in a negative and positive number. Let say -1, 1. \nHow should I do this in python?","AnswerCount":10,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43543,"Q_Id":10579518,"Users Score":0,"Answer":"If you want to generate 2 random integers between 2 negative values than print(f\"{-random.randint(1, 5)}\") can also do the work.","Q_Score":15,"Tags":"python,random","A_Id":71463872,"CreationDate":"2012-05-14T08:04:00.000","Title":"How to generate negative random value in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I apologise if this question has already been asked.\nI'm really new to Python programming, and what I need to do is this:\nI have a .csv file in which each line represent a person and each column represents a variable.\nThis .csv file comes from an agent-based C++ simulation I have done.\nNow, I need to read each line of this file and for each line generate a new instance of the class Person(), passing as arguments every variable line by line.\nMy problem is this: what is the most pythonic way of generating these agents while keeping their unique ID (which is one of the attributes I want to read from the file)? Do you suggest creating a class dictionary for accessing every instance? But I still need to provide a name to every single instance, right? How can I do that dynamically? 
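The sorted(..., key=...) suggestion in the card-ranking answer above, written out with the cards as strings and the missing numeric ranks filled in (a sketch, not the asker's exact dictionary):

```python
rank = {'3': 3, '4': 4, '5': 5, '6': 6, '7': 7, '8': 8, '9': 9,
        '0': 10, 'J': 11, 'Q': 12, 'K': 13, 'A': 14, '2': 15}

cards = ['3', '5', '9', '7', 'J', 'K', '2', '0']
print(sorted(cards, key=rank.__getitem__))
# -> ['3', '5', '7', '9', '0', 'J', 'K', '2']
```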
Probably the best thing would be to use the unique ID, read from the file, as the instance name, but I think that numbers can't be used as instance names, can they? I miss pointers! :(\nI am sure there is a pythonic solution I cannot see, as I still have to rewire my mind a bit to think in pythonic ways...\nThank you very much, any help would be greatly appreciated!\nAnd please remember that this is my first project in python, so go easy on me! ;)\nEDIT:\nThank you very much for your answers, but I still haven't got an answer on the main point: how to create an instance of my class Person() for every line in my csv file. I would like to do that automatically! Is it possible?\nWhy do I need this? Because I need to create networks of these people with networkx and I would like to have \"agents\" linked in a network structure, not just dictionary items.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":725,"Q_Id":10583195,"Users Score":1,"Answer":"I have a .csv file\n\nYou're in luck; CSV support is built right in, via the csv module.\n\nDo you suggest creating a class dictionary for accessing every instance?\n\nI don't know what you think you mean by \"class dictionary\". There are classes, and there are dictionaries.\n\nBut I still need to provide a name to every single instance, right? How can I do that dynamically? Probably the best thing would be to use the unique ID, read from the file, as the instance name, but I think that numbers can't be used as instance names, can they?\n\nNumbers can't be instance names, but they certainly can be dictionary keys.\nYou don't want to create \"instance names\" dynamically anyway (assuming you're thinking of having each in a separate variable or something gross like that). You want a dictionary. So just let the IDs be keys.\n\nI miss pointers! :(\n\nI really, honestly, can't imagine how you expect pointers to help here, and I have many years of experience with C++.","Q_Score":0,"Tags":"python,oop","A_Id":10583784,"CreationDate":"2012-05-14T12:19:00.000","Title":"How can I dynamically generate class instances with single attributes read from flat file in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to perform serialisation of some object graph in a modular way. That is I don't want to serialize the whole graph. The reason is that this graph is big. I can keep timestamped version of some part of the graph, and i can do some lazy access to postpone loading of the parts i don't need right now.\nI thought i could manage this with metaprogramming in Python. But it seems that metaprogramming is not strong enough in Python.\nHere's what i do for now. My graph is composed of several different objects. Some of them are instances of a special class. This class describes the root object to be pickled. This is where the modularity come in. Each time i pickle something it starts from one of those instances and i never pickle two of them at the same time. Whenever there is a reference to another instance, accessible by the root object, I replace this reference by a persistant_id, thus ensuring that i won't have two of them in the same pickling stream. The problem comes when unpickling the stream. I can found a persistant_id of an instance which is not loaded yet. 
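A sketch of the csv-module-plus-dictionary approach from the answer above; the file name, column names and Person attributes are invented for illustration:

```python
import csv

class Person:
    def __init__(self, uid, age, income):
        self.uid = uid
        self.age = age
        self.income = income

people = {}                                   # unique ID -> Person instance
with open("agents.csv") as f:
    for row in csv.DictReader(f):
        person = Person(int(row["id"]), int(row["age"]), float(row["income"]))
        people[person.uid] = person           # numbers work fine as dictionary keys
```

The resulting instances can then be added directly as nodes in a networkx graph, since any hashable object can be a node.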
When this is the case, i have to wait for the target instance to be loaded before allowing access to it. And i don't see anyway to do that :\n1\/ I tried to build an accessor which get methods return the target of the reference. Unfortunately, accessors must be placed in the class declaration, I can't assign them to the unpickled object.\n2\/ I could store somewhere the places where references have to be resolved. I don't think this is possible in Python : one can't keep reference to a place (a field, or a variable), it is only possible to keep a reference to a value.\nMy problem may not be clear. I'm still looking for a clear formulation. I tried other things like using explicit references which would be instances of some \"Reference\" class. It isn't very convenient though. \nDo you have any idea how to implement modular serialisation with pickle ? Would i have to change internal behaviour of Unpickler to be able to remember places where i need to load the remaining of the object graph ? Is there another library more suitable to achieve similar results ?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":289,"Q_Id":10607350,"Users Score":0,"Answer":"Metaprogramming is strong in Python; Python classes are extremely malleable. You can alter them after declaration all the way you want, though it's best done in a metaclass (decorator). More than that, instances are malleable, independently of their classes.\nA 'reference to a place' is often simply a string. E.g. a reference to object's field is its name. Assume you have multiple node references inside your node object. You could have something like {persistent_id: (object, field_name),..} as your unresolved references table, easy to look up. Similarly, in lists of nodes 'references to places' are indices.\nBTW, could you use a key-value database for graph storage? You'd be able to pull nodes by IDs without waiting.","Q_Score":1,"Tags":"python,pickle","A_Id":10608972,"CreationDate":"2012-05-15T19:16:00.000","Title":"Modular serialization with pickle (Python)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to perform serialisation of some object graph in a modular way. That is I don't want to serialize the whole graph. The reason is that this graph is big. I can keep timestamped version of some part of the graph, and i can do some lazy access to postpone loading of the parts i don't need right now.\nI thought i could manage this with metaprogramming in Python. But it seems that metaprogramming is not strong enough in Python.\nHere's what i do for now. My graph is composed of several different objects. Some of them are instances of a special class. This class describes the root object to be pickled. This is where the modularity come in. Each time i pickle something it starts from one of those instances and i never pickle two of them at the same time. Whenever there is a reference to another instance, accessible by the root object, I replace this reference by a persistant_id, thus ensuring that i won't have two of them in the same pickling stream. The problem comes when unpickling the stream. I can found a persistant_id of an instance which is not loaded yet. When this is the case, i have to wait for the target instance to be loaded before allowing access to it. 
And i don't see anyway to do that :\n1\/ I tried to build an accessor which get methods return the target of the reference. Unfortunately, accessors must be placed in the class declaration, I can't assign them to the unpickled object.\n2\/ I could store somewhere the places where references have to be resolved. I don't think this is possible in Python : one can't keep reference to a place (a field, or a variable), it is only possible to keep a reference to a value.\nMy problem may not be clear. I'm still looking for a clear formulation. I tried other things like using explicit references which would be instances of some \"Reference\" class. It isn't very convenient though. \nDo you have any idea how to implement modular serialisation with pickle ? Would i have to change internal behaviour of Unpickler to be able to remember places where i need to load the remaining of the object graph ? Is there another library more suitable to achieve similar results ?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":289,"Q_Id":10607350,"Users Score":0,"Answer":"Here's how I think I would go about this.\n\nHave a module level dictionary mapping persistent_id to SpecialClass objects. Every time you initialise or unpickle a SpecialClass instance, make sure that it is added to the dictionary.\nOverride SpecialClass's __getattr__ and __setattr__ method, so that specialobj.foo = anotherspecialobj merely stores a persistent_id in a dictionary on specialobj (let's call it specialobj.specialrefs). When you retrieve specialobj.foo, it finds the name in specialrefs, then finds the reference in the module-level dictionary.\nHave a module level check_graph function which would go through the known SpecialClass instances and check that all of their specialrefs were available.","Q_Score":1,"Tags":"python,pickle","A_Id":10608783,"CreationDate":"2012-05-15T19:16:00.000","Title":"Modular serialization with pickle (Python)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the Pandas package and it creates a DataFrame object, which is basically a labeled matrix. Often I have columns that have long string fields, or dataframes with many columns, so the simple print command doesn't work well. I've written some text output functions, but they aren't great.\nWhat I'd really love is a simple GUI that lets me interact with a dataframe \/ matrix \/ table. Just like you would find in a SQL tool. Basically a window that has a read-only spreadsheet like view into the data. I can expand columns, page up and down through long tables, etc.\nI would suspect something like this exists, but I must be Googling with the wrong terms. It would be great if it is pandas specific, but I would guess I could use any matrix-accepting tool. (BTW - I'm on Windows.)\nAny pointers?\nOr, conversely, if someone knows this space well and knows this probably doesn't exist, any suggestions on if there is a simple GUI framework \/ widget I could use to roll my own? (But since my needs are limited, I'm reluctant to have to learn a big GUI framework and do a bunch of coding for this one piece.)","AnswerCount":20,"Available Count":1,"Score":0.0099996667,"is_accepted":false,"ViewCount":100991,"Q_Id":10636024,"Users Score":1,"Answer":"I've also been searching very simple gui. 
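A minimal sketch of the __getattr__/__setattr__ scheme described in the answer above; the module-level registry and the specialrefs dictionary come from that answer, everything else is an assumption:

```python
_registry = {}   # module level: persistent_id -> SpecialClass instance

class SpecialClass(object):
    def __init__(self, persistent_id):
        object.__setattr__(self, "persistent_id", persistent_id)
        object.__setattr__(self, "specialrefs", {})   # attribute name -> persistent_id
        _registry[persistent_id] = self

    def __setattr__(self, name, value):
        if isinstance(value, SpecialClass):
            self.specialrefs[name] = value.persistent_id   # store only the id
        else:
            object.__setattr__(self, name, value)

    def __getattr__(self, name):
        # Only called when normal lookup fails, i.e. for deferred references.
        try:
            return _registry[self.specialrefs[name]]
        except KeyError:
            raise AttributeError("%s is not loaded yet" % name)

def check_graph():
    """Return the persistent ids that are referenced but not loaded yet."""
    loaded = set(_registry)
    wanted = {pid for obj in _registry.values() for pid in obj.specialrefs.values()}
    return wanted - loaded
```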
I was surprised that no one mentioned gtabview.\nIt is easy to install (just pip3 install gtabview ), and it loads data blazingly fast.\nI recommend using gtabview if you are not using spyder or Pycharm.","Q_Score":70,"Tags":"python,user-interface,pandas,dataframe","A_Id":54660955,"CreationDate":"2012-05-17T12:48:00.000","Title":"Python \/ Pandas - GUI for viewing a DataFrame or Matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been using the svgwrite library to generate a sequence of svg images. I would like to turn this sequence of images into an animated svg. The support for animated svg in svgwrite seems to only work in the form of algorithmically moving objects in the drawing. Is it possible to use the time slices I have to generate an animated svg or am I stuck rasterizing them and creating a video from the images. Thanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1810,"Q_Id":10661381,"Users Score":4,"Answer":"The support for animated svg in svgwrite seems to only work in the form of algorithmically moving objects in the drawing.\n\nWell, yes. That's how SVG animation works; it takes the current objects in the image and applies transformations to them. If you want a \"movie\" then you will need to make a video from the images.","Q_Score":0,"Tags":"python,svg,slice,animated","A_Id":10661419,"CreationDate":"2012-05-19T00:38:00.000","Title":"Generating animated SVG with python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im trying to produce a usual matrix multiplication between two huge matrices (10*25,000,000).\nMy memory runs out when I do so. How could I use numpy's memmap to be able to handle this?\nIs this even a good idea? I'm not so worried about the speed of the operation, I just want the result even if it means waiting some time. Thank you in advanced!\n8 gbs ram, I7-2617M 1.5 1.5 ghz, Windows7 64 bits. Im using the 64 bit version of everything: python(2.7), numpy, scipy.\nEdit1:\nMaybe h5py is a better option?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1031,"Q_Id":10669270,"Users Score":2,"Answer":"you might try to use np.memmap, and compute the 10x10 output matrix one element at a time.\nso you just load the first row of the first matrix and the first column of the second, and then np.sum(row1 * col1).","Q_Score":3,"Tags":"python,numpy,memory-management,matrix-multiplication,large-data","A_Id":12110615,"CreationDate":"2012-05-19T22:24:00.000","Title":"Python numpy memmap matrix multiplication","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have successfully install NumPy on Ubuntu; however when inside a virtualenv, NumPy is not available. I must be missing something obvious, but I do not understand why I can not import NumPy when using python from a virtualenv. Can anyone help? I am using Python 2.7.3 as my system-wide python and inside my virtualenv. 
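A sketch of the np.memmap suggestion from the matrix-multiplication answer above, accumulating the small 10 x 10 product chunk by chunk so that only a slice of each operand is in memory at a time; the file names assume the two matrices have already been written to disk as raw float64 data:

```python
import numpy as np

n = 25000000
a = np.memmap("a.dat", dtype="float64", mode="r", shape=(10, n))   # hypothetical files
b = np.memmap("b.dat", dtype="float64", mode="r", shape=(n, 10))

result = np.zeros((10, 10))
chunk = 1000000
for start in range(0, n, chunk):
    stop = min(start + chunk, n)
    # Only this slice of each operand is pulled into RAM.
    result += np.dot(a[:, start:stop], b[start:stop, :])
```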
Thanks in advance for the help.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":394,"Q_Id":10688601,"Users Score":2,"Answer":"You have to install it inside of your virtual environment. The easiest way to do this is:\n\nsource [virtualenv]\/bin\/activate\npip install numpy","Q_Score":0,"Tags":"python,ubuntu,numpy,virtualenv","A_Id":10688691,"CreationDate":"2012-05-21T16:01:00.000","Title":"Installed NumPy successfully, but not accessible with virtualenv","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was wondering if there is a way to use my GPU to speed up the training of a network in PyBrain.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2082,"Q_Id":10727140,"Users Score":2,"Answer":"Unless PyBrain is designed for that, you probably can't.\nYou might want to try running your trainer under PyPy if you aren't already -- it's significantly faster than CPython for some workloads. Perhaps this is one of those workloads. :)","Q_Score":1,"Tags":"python,neural-network,gpu,pybrain","A_Id":10727166,"CreationDate":"2012-05-23T20:12:00.000","Title":"How can I speed up the training of a network using my GPU?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a monotonically growing sequence of integers. For example \nseq=[(0, 0), (1, 5), (10, 20), (15, 24)]. \nAnd a integer value greater than the largest argument in the sequence (a > seq[-1][0]). I want to estimate value corresponding to the given value. The sequence grows nearly linearly, and earlier values are less important than later. Nevertheless I can't simply take 2 last points and calculate new value, because mistakes are very likely and the curve may change the angle.\nCan anyone suggest a simple solution for this kind of task in Python?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":254,"Q_Id":10768817,"Users Score":0,"Answer":"If the sequence does not have a lot of noise, just use the latest point, and the point for 1\/3 of the current, then estimate your line from that. Otherwise do something more complicated like a least squares fit for the latter half of the sequence.\nIf you search on Google, there are a number of code samples for doing the latter, and some modules that may help. 
(I'm not a Python programmer so I can't give a meaningful recommendation for the best one.)","Q_Score":1,"Tags":"python,math,numpy,scipy","A_Id":10768872,"CreationDate":"2012-05-26T18:36:00.000","Title":"How to calculate estimation for monotonically growing sequence in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Good day,\nI'm attempting to write a sentimental analysis application in python (Using naive-bayes classifier) with the aim to categorize phrases from news as being positive or negative.\nAnd I'm having a bit of trouble finding an appropriate corpus for that.\nI tried using \"General Inquirer\" (http:\/\/www.wjh.harvard.edu\/~inquirer\/homecat.htm) which works OK but I have one big problem there.\nSince it is a word list, not a phrase list I observe the following problem when trying to label the following sentence:\n\nHe is not expected to win.\n\nThis sentence is categorized as being positive, which is wrong. The reason for that is that \"win\" is positive, but \"not\" does not carry any meaning since \"not win\" is a phrase.\nCan anyone suggest either a corpus or a work around for that issue?\nYour help and insight is greatly appreciated.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":1434,"Q_Id":10789834,"Users Score":3,"Answer":"In this case, the word not modifies the meaning of the phrase expected to win, reversing it. To identify this, you would need to POS tag the sentence and apply the negative adverb not to the (I think) verb phrase as a negation. I don't know if there is a corpus that would tell you that not would be this type of modifier or not, however.","Q_Score":5,"Tags":"python,nlp,nltk","A_Id":10790083,"CreationDate":"2012-05-28T19:56:00.000","Title":"Phrase corpus for sentimental analysis","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently rewriting some python code to make it more efficient and I have a question about saving python arrays so that they can be re-used \/ manipulated later.\nI have a large number of data, saved in CSV files. Each file contains time-stamped values of the data that I am interested in and I have reached the point where I have to deal with tens of millions of data points. The data has got so large now that the processing time is excessive and inefficient---the way the current code is written the entire data set has to be reprocessed every time some new data is added. \nWhat I want to do is this:\n\nRead in all of the existing data to python arrays\nSave the variable arrays to some kind of database\/file\nThen, the next time more data is added I load my database, append the new data, and resave it. This way only a small number of data need to be processed at any one time.\nI would like the saved data to be accessible to further python scripts but also to be fairly \"human readable\" so that it can be handled in programs like OriginPro or perhaps even Excel. \n\nMy question is: whats the best format to save the data in? HDF5 seems like it might have all the features I need---but would something like SQLite make more sense? \nEDIT: My data is single dimensional. 
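For the earlier answer about estimating a monotonically growing sequence, this is one way the suggested least-squares fit over the latter half of the points might look; the sample data is taken from that question and the split point is arbitrary:

```python
import numpy as np

seq = [(0, 0), (1, 5), (10, 20), (15, 24)]          # the example sequence from the question
xs, ys = np.array(seq, dtype=float).T

half = len(xs) // 2                                  # fit only the later points
slope, intercept = np.polyfit(xs[half:], ys[half:], 1)

def estimate(x):
    """Linear extrapolation past the last known argument."""
    return slope * x + intercept

print(estimate(20))
```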
I essentially have 30 arrays which are (millions, 1) in size. If it wasn't for the fact that there are so many points then CSV would be an ideal format! I am unlikely to want to do lookups of single entries---more likely is that I might want to plot small subsets of data (eg the last 100 hours, or the last 1000 hours, etc).","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1123,"Q_Id":10800039,"Users Score":0,"Answer":"I would use a single file with fixed record length for this usecase. No specialised DB solution (seems overkill to me in that case), just plain old struct (see the documentation for struct.py) and read()\/write() on a file. If you have just millions of entries, everything should be working nicely in a single file of some dozens or hundreds of MB size (which is hardly too large for any file system). You also have random access to subsets in case you will need that later.","Q_Score":3,"Tags":"python,database,arrays,save,hdf5","A_Id":10802164,"CreationDate":"2012-05-29T13:20:00.000","Title":"Saving large Python arrays to disk for re-use later --- hdf5? Some other method?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently rewriting some python code to make it more efficient and I have a question about saving python arrays so that they can be re-used \/ manipulated later.\nI have a large number of data, saved in CSV files. Each file contains time-stamped values of the data that I am interested in and I have reached the point where I have to deal with tens of millions of data points. The data has got so large now that the processing time is excessive and inefficient---the way the current code is written the entire data set has to be reprocessed every time some new data is added. \nWhat I want to do is this:\n\nRead in all of the existing data to python arrays\nSave the variable arrays to some kind of database\/file\nThen, the next time more data is added I load my database, append the new data, and resave it. This way only a small number of data need to be processed at any one time.\nI would like the saved data to be accessible to further python scripts but also to be fairly \"human readable\" so that it can be handled in programs like OriginPro or perhaps even Excel. \n\nMy question is: whats the best format to save the data in? HDF5 seems like it might have all the features I need---but would something like SQLite make more sense? \nEDIT: My data is single dimensional. I essentially have 30 arrays which are (millions, 1) in size. If it wasn't for the fact that there are so many points then CSV would be an ideal format! I am unlikely to want to do lookups of single entries---more likely is that I might want to plot small subsets of data (eg the last 100 hours, or the last 1000 hours, etc).","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1123,"Q_Id":10800039,"Users Score":2,"Answer":"HDF5 is an excellent choice! It has a nice interface, is widely used (in the scientific community at least), many programs have support for it (matlab for example), there are libraries for C,C++,fortran,python,... It has a complete toolset to display the contents of a HDF5 file. If you later want to do complex MPI calculation on your data, HDF5 has support for concurrently read\/writes. 
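A minimal sketch of the fixed-record-length file suggested in the answer above, using the struct module; the record layout (one float64 timestamp plus one float64 value) is an assumption:

```python
import struct

RECORD = struct.Struct("<dd")          # little-endian: timestamp, value

def append_records(path, rows):
    """Append new (timestamp, value) pairs without touching existing data."""
    with open(path, "ab") as f:
        for ts, val in rows:
            f.write(RECORD.pack(ts, val))

def read_last(path, count):
    """Random access: read only the last `count` records."""
    with open(path, "rb") as f:
        f.seek(0, 2)                                   # jump to end of file
        n = min(count, f.tell() // RECORD.size)
        f.seek(-n * RECORD.size, 2)
        data = f.read(n * RECORD.size)
    return [RECORD.unpack_from(data, i * RECORD.size) for i in range(n)]
```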
It's very well suited to handle very large datasets.","Q_Score":3,"Tags":"python,database,arrays,save,hdf5","A_Id":10817026,"CreationDate":"2012-05-29T13:20:00.000","Title":"Saving large Python arrays to disk for re-use later --- hdf5? Some other method?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a telescope project and we are testing the CCD. Whenever we take pictures things are slightly pink-tinted and we need true color to correctly image galactic objects. I am planning on writing a small program in python or java to change the color weights but how can I access the weight of the color in a raw data file (it is rgb.bin)?\nWe are using a bayer matrix algorithm to convert monochromatic files to color files and I would imagine the problem is coming from there but I would like to fix it with a small color correcting program.\nThanks!","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1116,"Q_Id":10805356,"Users Score":3,"Answer":"Typical white-balance issues are caused by differing proportions of red, green, and blue in the makeup of the light illuminating a scene, or differences in the sensitivities of the sensors to those colors. These errors are generally linear, so you correct for them by multiplying by the inverse of the error.\nSuppose you measure a point you expect to be perfectly white, and its RGB values are (248,237,236) i.e. pink. If you multiply each pixel in the image by (248\/248,248\/237,248\/236) you will end up with the correct balance.\nYou should definitely ensure that your Bayer filter is producing the proper results first, or the base assumption of linear errors will be incorrect.","Q_Score":1,"Tags":"java,python,image-processing,rgb","A_Id":10807433,"CreationDate":"2012-05-29T19:22:00.000","Title":"Change color weight of raw image file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to port from labview to python. \nIn labview there is a function \"Integral x(t) VI\" that takes a set of samples as input, performs a discrete integration of the samples and returns a list of values (the areas under the curve) according to Simpsons rule.\nI tried to find an equivalent function in scipy, e.g. scipy.integrate.simps, but those functions return the summed integral across the set of samples, as a float.\nHow do I get the list of integrated values as opposed to the sum of the integrated values? \nAm I just looking at the problem the wrong way around?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":26770,"Q_Id":10814353,"Users Score":3,"Answer":"There is only one method in SciPy that does cumulative integration which is scipy.integrate.cumtrapz() which does what you want as long as you don't specifically need to use the Simpson rule or another method. 
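The white-balance answer above can be written in a few lines of numpy; the (248, 237, 236) white sample comes from that answer, while the use of numpy arrays and the rounding and clipping are assumptions:

```python
import numpy as np

def white_balance(img, white_sample=(248.0, 237.0, 236.0)):
    """Scale each channel so the measured 'white' patch becomes neutral.

    img is an H x W x 3 uint8 array; white_sample is the RGB reading of a
    patch that should be white (the numbers from the answer above).
    """
    gains = max(white_sample) / np.asarray(white_sample)   # (1.0, ~1.046, ~1.051)
    balanced = img.astype(np.float64) * gains              # broadcasts over the colour axis
    return np.clip(np.rint(balanced), 0, 255).astype(np.uint8)

# tiny usage example on a synthetic pink-tinted frame
frame = np.tile(np.array([248, 237, 236], dtype=np.uint8), (4, 4, 1))
print(white_balance(frame)[0, 0])                          # -> [248 248 248]
```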
For that, you can as suggested always write the loop on your own.","Q_Score":7,"Tags":"python,integration,scipy,integral","A_Id":39332002,"CreationDate":"2012-05-30T10:19:00.000","Title":"Using scipy to perform discrete integration of the sample","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written an algorithm that takes geospatial data and performs a number of steps. The input data are a shapefile of polygons and covariate rasters for a large raster study area (~150 million pixels). The steps are as follows:\n\nSample points from within polygons of the shapefile\nFor each sampling point, extract values from the covariate rasters\nBuild a predictive model on the sampling points\nExtract covariates for target grid points\nApply predictive model to target grid\nWrite predictions to a set of output grids\n\nThe whole process needs to be iterated a number of times (say 100) but each iteration currently takes more than an hour when processed in series. For each iteration, the most time-consuming parts are step 4 and 5. Because the target grid is so large, I've been processing it a block (say 1000 rows) at a time.\nI have a 6-core CPU with 32 Gb RAM, so within each iteration, I had a go at using Python's multiprocessing module with a Pool object to process a number of blocks simultaneously (steps 4 and 5) and then write the output (the predictions) to the common set of output grids using a callback function that calls a global output-writing function. This seems to work, but is no faster (actually, it's probably slower) than processing each block in series.\nSo my question is, is there a more efficient way to do it? I'm interested in the multiprocessing module's Queue class, but I'm not really sure how it works. For example, I'm wondering if it's more efficient to have a queue that carries out steps 4 and 5 then passes the results to another queue that carries out step 6. Or is this even what Queue is for?\nAny pointers would be appreciated.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":2043,"Q_Id":10843240,"Users Score":1,"Answer":"As python is not really meant to do intensive number-cunching, I typically start converting time-critical parts of a python program to C\/C++ and speed things up a lot.\nAlso, the python multithreading is not very good. Python keeps using a global semaphore for all kinds of things. So even when you use the Threads that python offers, things won't get faster. 
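A short usage sketch of the cumulative integration mentioned in the SciPy answer above; note that recent SciPy releases expose it as cumulative_trapezoid, while older versions use the cumtrapz name from the answer:

```python
import numpy as np
try:
    from scipy.integrate import cumulative_trapezoid as cumtrapz  # SciPy >= 1.6
except ImportError:
    from scipy.integrate import cumtrapz                          # older SciPy, as in the answer

t = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(t)

running = cumtrapz(y, t, initial=0)   # one integrated value per sample, not a single float
print(running[-1])                    # close to 0 over a full period of sin
```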
The threads are useful for applications, where threads will typically wait for things like IO.\nWhen making a C module, you can manually release the global semaphore when processing your data (then, of course, do not access the python values anymore).\nIt takes some practise using the C API, but's its clearly structured and much easier to use than, for example, the Java native API.\nSee 'extending and embedding' in the python documentation.\nThis way you can make the time critical parts in C\/C++, and the slower parts with faster programming work in python...","Q_Score":18,"Tags":"python,iteration,multiprocessing,gdal","A_Id":11115655,"CreationDate":"2012-06-01T01:12:00.000","Title":"Python multiprocessing design","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to write a python program for appending live stock quotes from a csv file to an excel file (which is already open) using xlrd and xlwt.\nThe task is summarised below.\nFrom my stock-broker's application, a csv file is continually being updated on my hard disk.\nI wish to write a program which, when run, would append the new data from csv file to an excel file, which is kept open (I wonder whether it is possible to read & write an open file).\nI wish to keep the file open because I will be having stock-charts in it.\nIs it possible? If yes, how?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1299,"Q_Id":10851726,"Users Score":1,"Answer":"Not directly. xlutils can use xlrd and xlwt to copy a spreadsheet, and appending to a \"to be written\" worksheet is straightforward. I don't think reading the open spreadsheet is a problem -- but xlwt will not write to the open book\/sheet.\nYou might write an Excel VBA macro to draw the graphs. In principle, I think a macro from a command workbook could close your stock workbook, invoke your python code to copy and update, open the new spreadsheet, and maybe run the macro to re-draw the graphs.\nAnother approach is to use matplotlib for the graphs. I'd think a sleep loop could wake up every n seconds, grab the new csv data, append it to your \"big\" csv data, and re-draw the graph. Taking this approach keeps you in python and should make things a lot easier, imho. Disclosure: my Python is better than my VBA.","Q_Score":1,"Tags":"python,xlrd,xlwt","A_Id":10857757,"CreationDate":"2012-06-01T14:01:00.000","Title":"xlrd - append data to already opened workbook","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm attempting to create my own corpus in NLTK. I've been reading some of the documentation on this and it seems rather complicated... all I wanted to do is \"clone\" the movie reviews corpus but with my own text. Now, I know I can just change files in the move reviews corpus to my own... but that limits me to working with just one such corpus at a time (ie. I'd have to continually be swapping files). 
is there any way i could just clone the movie reviews corpus?\nthanks\nAlex","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":301,"Q_Id":10874994,"Users Score":0,"Answer":"Why don't you a define a new corpus by copying the definition of movie_reviews in nltk.corpus? You can do this all you want with new directories, and then copy the directory structure and replace the files.","Q_Score":0,"Tags":"python,nlp,nltk,corpus","A_Id":10875787,"CreationDate":"2012-06-04T00:09:00.000","Title":"\"Cloning\" a corpus in NLTK?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an image that is created by using a bayer filter and the colors are slightly off. I need to multiply RG and B of each pixel by a certain factor ( a different factor for R, G and B each) to get the correct color. I am using the python imaging library and of course writing in python. is there any way to do this efficiently?\nThanks!","AnswerCount":6,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":12396,"Q_Id":10885984,"Users Score":2,"Answer":"As a basic optimization, it may save a little time if you create 3 lookup tables, one each for R, G, and B, to map the input value (0-255) to the output value (0-255). Looking up an array entry is probably faster than multiplying by a decimal value and rounding the result to an integer. Not sure how much faster.\nOf course, this assumes that the values should always map the same.","Q_Score":2,"Tags":"python,image-processing,python-imaging-library,rgb,pixel","A_Id":10886261,"CreationDate":"2012-06-04T18:07:00.000","Title":"Multiply each pixel in an image by a factor","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an image that is created by using a bayer filter and the colors are slightly off. I need to multiply RG and B of each pixel by a certain factor ( a different factor for R, G and B each) to get the correct color. I am using the python imaging library and of course writing in python. is there any way to do this efficiently?\nThanks!","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":12396,"Q_Id":10885984,"Users Score":0,"Answer":"If the type is numpy.ndarray just img = np.uint8(img*factor)","Q_Score":2,"Tags":"python,image-processing,python-imaging-library,rgb,pixel","A_Id":64900468,"CreationDate":"2012-06-04T18:07:00.000","Title":"Multiply each pixel in an image by a factor","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have build a simple webcam recorder on linux which works quite well.\nI get ~25fps video and good audio.\nI am porting the recorder on windows (win7) and while it works, it is unusable.\nThe QueryFrame function takes something more than 350ms, i.e 2.5fps. \nThe code is in python but the problem really seems to be the lib call.\nI tested on the same machine with the same webcam (a logitech E2500). \nOn windows, I installed openCV v2.2. I cannot check right now but the version might be a bit higher on Ubuntu. 
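A sketch of the lookup-table idea from the pixel-scaling answer above, done with numpy indexing rather than PIL so the three 256-entry tables are built once and applied to every pixel; the gain factors are placeholders:

```python
import numpy as np

gains = (1.00, 1.05, 0.95)                  # hypothetical per-channel correction factors

# One 256-entry table per channel: input value -> corrected output value.
luts = np.stack([np.clip(np.rint(np.arange(256) * g), 0, 255).astype(np.uint8)
                 for g in gains])

def apply_gains(img):
    """img: H x W x 3 uint8 array (e.g. the demosaiced Bayer output)."""
    out = np.empty_like(img)
    for c in range(3):
        out[..., c] = luts[c][img[..., c]]  # table lookup instead of a per-pixel multiply
    return out
```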
\nAny idea what could be the problem ? \nedit : I've just installed openCV2.4 and I have the same slow speed.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":565,"Q_Id":10887836,"Users Score":1,"Answer":"I had same issue and I found out that this is caused by prolonged exposure. It may be the case that Windows drivers increased exposure to increase brightness of picture. Try to point your camera to light source or manually set decreased exposure","Q_Score":1,"Tags":"python,windows,performance,opencv","A_Id":12700150,"CreationDate":"2012-06-04T20:23:00.000","Title":"QueryFrame very slow on Windows","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using sequential seeds (1,2,3,4,...) for generation of random numbers in a simulation. Does the fact that the seeds are near each other make the generated pseudo-random numbers similar as well?\nI think it doesn't change anything, but I'm using python\nEdit: I have done some tests and the numbers don't look similar. But I'm afraid that the similarity cannot be noticed just by looking at the numbers. Is there any theoretical feature of random number generation that guarantees that different seeds give completely independent pseudo-random numbers?","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":2871,"Q_Id":10900852,"Users Score":0,"Answer":"First: define similarity. Next: code a similarity test. Then: check for similarity.\nWith only a vague description of similarity it is hard to check for it.","Q_Score":12,"Tags":"python,random,seed","A_Id":10901418,"CreationDate":"2012-06-05T16:09:00.000","Title":"May near seeds in random number generation give similar random numbers?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using sequential seeds (1,2,3,4,...) for generation of random numbers in a simulation. Does the fact that the seeds are near each other make the generated pseudo-random numbers similar as well?\nI think it doesn't change anything, but I'm using python\nEdit: I have done some tests and the numbers don't look similar. But I'm afraid that the similarity cannot be noticed just by looking at the numbers. Is there any theoretical feature of random number generation that guarantees that different seeds give completely independent pseudo-random numbers?","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":2871,"Q_Id":10900852,"Users Score":0,"Answer":"What kind of simulation are you doing? \nFor simulation purposes your argument is valid (depending on the type of simulation) but if you implement it in an environment other than simulation, then it could be easily hacked if it requires that there are security concerns of the environment based on the generated random numbers.\nIf you are simulating the outcome of a machine whether it is harmful to society or not then the outcome of your results will not be acceptable. 
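One way to act on the "define similarity, then test it" advice above is to correlate streams produced from adjacent seeds; what counts as "similar" is of course up to the simulation, and the stream length here is arbitrary:

```python
import random
import numpy as np

def stream(seed, n=100000):
    rng = random.Random(seed)                 # independent generator per seed
    return np.array([rng.random() for _ in range(n)])

a, b = stream(1), stream(2)                   # adjacent seeds
print(np.corrcoef(a, b)[0, 1])                # near 0 for unrelated streams
print(np.corrcoef(a, stream(1))[0, 1])        # 1.0: the same seed reproduces the same stream
```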
It requires maximum randomness in every way possible and I would never trust your reasoning.","Q_Score":12,"Tags":"python,random,seed","A_Id":10904377,"CreationDate":"2012-06-05T16:09:00.000","Title":"May near seeds in random number generation give similar random numbers?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using sequential seeds (1,2,3,4,...) for generation of random numbers in a simulation. Does the fact that the seeds are near each other make the generated pseudo-random numbers similar as well?\nI think it doesn't change anything, but I'm using python\nEdit: I have done some tests and the numbers don't look similar. But I'm afraid that the similarity cannot be noticed just by looking at the numbers. Is there any theoretical feature of random number generation that guarantees that different seeds give completely independent pseudo-random numbers?","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":2871,"Q_Id":10900852,"Users Score":0,"Answer":"To quote the documentation from the random module:\n\nGeneral notes on the underlying Mersenne Twister core generator:\n\nThe period is 2**19937-1.\nIt is one of the most extensively tested generators in existence.\n\n\nI'd be more worried about my code being broken than my RNG not being random enough. In general, your gut feelings about randomness are going to be wrong - the Human mind is really good at finding patterns, even if they don't exist.\nAs long as you know your results aren't going to be 'secure' due to your lack of random seeding, you should be fine.","Q_Score":12,"Tags":"python,random,seed","A_Id":10905149,"CreationDate":"2012-06-05T16:09:00.000","Title":"May near seeds in random number generation give similar random numbers?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to implement an item-based news recommendation system. There are several ways I want to track a user's interest in a news item; they include: rating (1-5), favorite, click-through, and time spent on news item.\nMy question: what are some good methods to use these different metrics for the recommendation system? Maybe merge and normalize them in some way?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":614,"Q_Id":10920199,"Users Score":0,"Answer":"Recommender systems in the land of research generally work on a scale of 1 - 5. It's quite nice to get such an explicit signal from a user. However I'd imagine the reality is that most users of your system would never actually give a rating, in which case you have nothing to work with. 
\nTherefore I'd track page views but also try and incorporate some explicit feedback mechanism (1-5, thumbs up or down etc.)\nYour algorithm will have to take this into consideration.","Q_Score":0,"Tags":"python,metrics,recommendation-engine,personalization,cosine-similarity","A_Id":10955138,"CreationDate":"2012-06-06T18:43:00.000","Title":"Recommendation system - using different metrics","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to implement an item-based news recommendation system. There are several ways I want to track a user's interest in a news item; they include: rating (1-5), favorite, click-through, and time spent on news item.\nMy question: what are some good methods to use these different metrics for the recommendation system? Maybe merge and normalize them in some way?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":614,"Q_Id":10920199,"Users Score":0,"Answer":"For recommendation system, there are two problems:\n\nhow to quantify the user's interest in a certain item based on the numbers you collected\nhow to use the quantified interest data to recommend new items to the user\n\nI guess you are more interested in the first problem.\nTo solve the first problem, you need either linear combination or some other fancy functions to combine all the numbers. There is really no a single universal function for all systems. It heavily depends on the type of your users and your items. If you want a high quality recommandation system, you need to have some data to do machine learning to train your functions.\nFor the second problem, it's somehow the same thing, plus you need to analyze all the items to abstract some relationships between each other. You can google \"Netflix prize\" for some interesting info.","Q_Score":0,"Tags":"python,metrics,recommendation-engine,personalization,cosine-similarity","A_Id":10956591,"CreationDate":"2012-06-06T18:43:00.000","Title":"Recommendation system - using different metrics","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install epdfree on two virtually identical machines: Linux 2.6.18-308.1.1.el5, CentOS release 5.8., 64-bit machines. (BTW, I'm a bit new to python.)\nAfter the install on one machine, I run python and try to import scipy. Everything goes fine.\nOn the other machine, I follow all the same steps as far as I can tell, but when I try to import scipy, I am told \u201cImportError: No module named scipy\u201d.\nAs far as I can tell, I am doing everything the same on the two machines. I installed from the same script, I run the python in the epdfree installation directory, everything I can think of.\nDoes anyone have any idea what would keep \u201cimport scipy\u201d from working on one machine while it works fine on the other? 
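A toy sketch of the "linear combination" idea from the answer above for turning rating, favourite, click and dwell-time signals into one interest score; the weights and normalisations are placeholders that would normally be tuned or learned:

```python
def interest_score(rating=None, favorited=False, clicked=False, seconds_on_page=0.0):
    """Combine explicit and implicit signals into one number in roughly [0, 1]."""
    parts = []
    if rating is not None:
        parts.append(0.6 * (rating - 1) / 4.0)          # explicit 1-5 rating, strongest signal
    parts.append(0.2 * (1.0 if favorited else 0.0))
    parts.append(0.1 * (1.0 if clicked else 0.0))
    parts.append(0.1 * min(seconds_on_page / 120.0, 1.0))  # cap dwell time at two minutes
    return sum(parts)

print(interest_score(rating=4, clicked=True, seconds_on_page=45))
```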
Thanks.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":242,"Q_Id":10921655,"Users Score":0,"Answer":"The problem is that you don't have the library scipy installed, which is a totally different library of epdfree.\nyou can install it from apt-get in linux I guess, or going to their website\nwww.scipy.org","Q_Score":0,"Tags":"python,enthought","A_Id":10921702,"CreationDate":"2012-06-06T20:22:00.000","Title":"trouble with installing epdfree","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install epdfree on two virtually identical machines: Linux 2.6.18-308.1.1.el5, CentOS release 5.8., 64-bit machines. (BTW, I'm a bit new to python.)\nAfter the install on one machine, I run python and try to import scipy. Everything goes fine.\nOn the other machine, I follow all the same steps as far as I can tell, but when I try to import scipy, I am told \u201cImportError: No module named scipy\u201d.\nAs far as I can tell, I am doing everything the same on the two machines. I installed from the same script, I run the python in the epdfree installation directory, everything I can think of.\nDoes anyone have any idea what would keep \u201cimport scipy\u201d from working on one machine while it works fine on the other? Thanks.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":242,"Q_Id":10921655,"Users Score":1,"Answer":"Well, turns out there was one difference. File permissions were being set differently on the two machines. I installed epdfree as su on both machines. On the second machine, everything was locked out when I tried to run it without going under \"su\". Now my next task is to find out why the permissions were set differently. I guess it's a difference in umask settings? Well, this I won't bother anyone with. But feel free to offer an answer if you want to! Thanks.","Q_Score":0,"Tags":"python,enthought","A_Id":10922030,"CreationDate":"2012-06-06T20:22:00.000","Title":"trouble with installing epdfree","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a way to pass NumPy arrays to Matlab. \nI've managed to do this by storing the array into an image using scipy.misc.imsave and then loading it using imread, but this of course causes the matrix to contain values between 0 and 256 instead of the 'real' values. \nTaking the product of this matrix divided by 256, and the maximum value in the original NumPy array gives me the correct matrix, but I feel that this is a bit tedious. 
\nis there a simpler way?","AnswerCount":9,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":75113,"Q_Id":10997254,"Users Score":0,"Answer":"In latest R2021a, you can pass a python numpy ndarray to double() and it will convert to a native matlab matrix, even when calling in console the numpy array it will suggest at the bottom \"Use double function to convert to a MATLAB array\"","Q_Score":44,"Tags":"python,matlab,numpy","A_Id":69231502,"CreationDate":"2012-06-12T13:02:00.000","Title":"\"Converting\" Numpy arrays to Matlab and vice versa","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is a particular kind of lossy compression that's quite easy to implement in numpy.\nI could in principle directly compare original (float64) to reconstructed (float64(float32(original)) and know things like the maximum error.\nOther than looking at the maximum error for my actual data, does anybody have a good idea what type of distortions this creates, e.g. as a function of the magnitude of the original value? \nWould I be better off mapping all values (in 64-bits) onto say [-1,1] first (as a fraction of extreme values, which could be preserved in 64-bits) to take advantage of greater density of floats near zero?\nI'm adding a specific case I have in mind. Let's say I have 500k to 1e6 values ranging from -20 to 20, that are approximately IID ~ Normal(mu=0,sigma=4) so they're already pretty concentrated near zero and the \"20\" is ~5-sigma rare. Let's say they are scientific measurements where the true precision is a whole lot less than the 64-bit floats, but hard to really know exactly. I have tons of separate instances (potentially TB's worth) so compressing has a lot of practical value, and float32 is a quick way to get 50% (and if anything, works better with an additional round of lossless compression like gzip). So the \"-20 to 20\" eliminates a lot of concerns about really large values.","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1086,"Q_Id":11007169,"Users Score":6,"Answer":"It is unlikely that a simple transformation will reduce error significantly, since your distribution is centered around zero.\nScaling can have effect in only two ways: One, it moves values away from the denormal interval of single-precision values, (-2^-126, 2^-126). (E.g., if you multiply by, say, 2^123 values that were in [2^-249, 2^-126) are mapped to [2^-126, 2^-3), which is outside the denormal interval.) Two, it changes where values lie in each \u201cbinade\u201d (interval from one power of two to the next). E.g., your maximum value is 20, where the relative error may be 1\/2 ULP \/ 20, where the ULP for that binade is 16*2^-23 = 2^-19, so the relative error may be 1\/2 * 2^-19 \/ 20, about 4.77e-8. Suppose you scale by 32\/20, so values just under 20 become values just under 32. Then, when you convert to float, the relative error is at most 1\/2 * 2^-19 \/ 32 (or just under 32), about 2.98e-8. So you may reduce the error slightly.\nWith regard to the former, if your values are nearly normally distributed, very few are in (-2^-126, 2^-126), simply because that interval is so small. (A trillion samples of your normal distribution almost certainly have no values in that interval.) You say these are scientific measurements, so perhaps they are produced with some instrument. 
It may be that the machine does not measure or calculate finely enough to return values that range from 2^-126 to 20, so it would not surprise me if you have no values in the denormal interval at all. If you have no values in the single-precision denormal range, then scaling to avoid that range is of no use.\nWith regard to the latter, we see a small improvement is available at the end of your range. However, elsewhere in your range, some values are also moved to the high end of a binade, but some are moved across a binade boundary to the small end of a new binade, resulting in increased relative error for them. It is unlikely there is a significant net improvement.\nOn the other hand, we do not know what is significant for your application. How much error can your application tolerate? Will the change in the ultimate result be unnoticeable if random noise of 1% is added to each number? Or will the result be completely unacceptable if a few numbers change by as little as 2^-200?\nWhat do you know about the machinery producing these numbers? Is it truly producing numbers more precise than single-precision floats? Perhaps, although it produces 64-bit floating-point values, the actual values are limited to a population that is representable in 32-bit floating-point. Have you performed a conversion from double to float and measured the error?\nThere is still insufficient information to rule out these or other possibilities, but my best guess is that there is little to gain by any transformation. Converting to float will either introduce too much error or it will not, and transforming the numbers first is unlikely to alter that.","Q_Score":4,"Tags":"python,numpy,floating-point,compression","A_Id":11027196,"CreationDate":"2012-06-13T01:42:00.000","Title":"What should I worry about if I compress float64 array to float32 in numpy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is a particular kind of lossy compression that's quite easy to implement in numpy.\nI could in principle directly compare original (float64) to reconstructed (float64(float32(original)) and know things like the maximum error.\nOther than looking at the maximum error for my actual data, does anybody have a good idea what type of distortions this creates, e.g. as a function of the magnitude of the original value? \nWould I be better off mapping all values (in 64-bits) onto say [-1,1] first (as a fraction of extreme values, which could be preserved in 64-bits) to take advantage of greater density of floats near zero?\nI'm adding a specific case I have in mind. Let's say I have 500k to 1e6 values ranging from -20 to 20, that are approximately IID ~ Normal(mu=0,sigma=4) so they're already pretty concentrated near zero and the \"20\" is ~5-sigma rare. Let's say they are scientific measurements where the true precision is a whole lot less than the 64-bit floats, but hard to really know exactly. I have tons of separate instances (potentially TB's worth) so compressing has a lot of practical value, and float32 is a quick way to get 50% (and if anything, works better with an additional round of lossless compression like gzip). 
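A quick numerical check of the float64-to-float32 error bounds discussed in these answers, on data shaped like the question describes; it simply measures the worst relative error of a round trip:

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.normal(0.0, 4.0, size=500000)            # ~ Normal(mu=0, sigma=4), as in the question

roundtrip = x.astype(np.float32).astype(np.float64)
rel_err = np.abs(roundtrip - x) / np.abs(x)

print(rel_err.max())                             # stays below 2**-24
print(np.finfo(np.float32).eps / 2)              # the 1/2 ULP relative bound, 2**-24 ~ 5.96e-08
```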
So the \"-20 to 20\" eliminates a lot of concerns about really large values.","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":1086,"Q_Id":11007169,"Users Score":2,"Answer":"The exponent for float32 is quite a lot smaller (or bigger in the case of negative exponents), but assuming all you numbers are less than that you only need to worry about the loss of precision. float32 is only good to about 7 or 8 significant decimal digits","Q_Score":4,"Tags":"python,numpy,floating-point,compression","A_Id":11007250,"CreationDate":"2012-06-13T01:42:00.000","Title":"What should I worry about if I compress float64 array to float32 in numpy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is a particular kind of lossy compression that's quite easy to implement in numpy.\nI could in principle directly compare original (float64) to reconstructed (float64(float32(original)) and know things like the maximum error.\nOther than looking at the maximum error for my actual data, does anybody have a good idea what type of distortions this creates, e.g. as a function of the magnitude of the original value? \nWould I be better off mapping all values (in 64-bits) onto say [-1,1] first (as a fraction of extreme values, which could be preserved in 64-bits) to take advantage of greater density of floats near zero?\nI'm adding a specific case I have in mind. Let's say I have 500k to 1e6 values ranging from -20 to 20, that are approximately IID ~ Normal(mu=0,sigma=4) so they're already pretty concentrated near zero and the \"20\" is ~5-sigma rare. Let's say they are scientific measurements where the true precision is a whole lot less than the 64-bit floats, but hard to really know exactly. I have tons of separate instances (potentially TB's worth) so compressing has a lot of practical value, and float32 is a quick way to get 50% (and if anything, works better with an additional round of lossless compression like gzip). So the \"-20 to 20\" eliminates a lot of concerns about really large values.","AnswerCount":3,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":1086,"Q_Id":11007169,"Users Score":7,"Answer":"The following assumes you are using standard IEEE-754 floating-point operations, which are common (with some exceptions), in the usual round-to-nearest mode.\nIf a double value is within the normal range of float values, then the only change that occurs when the double is rounded to a float is that the significand (fraction portion of the value) is rounded from 53 bits to 24 bits. This will cause an error of at most 1\/2 ULP (unit of least precision). The ULP of a float is 2-23 times the greatest power of two not greater than the float. E.g., if a float is 7.25, the greatest power of two not greater than it is 4, so its ULP is 4*2-23 = 2-21, about 4.77e-7. So the error when double in the interval [4, 8) is converted to float is at most 2-22, about 2.38e-7. For another example, if a float is about .03, the greatest power of two not greater than it is 2-6, so the ULP is 2-29, and the maximum error when converting to double is 2-30.\nThose are absolute errors. The relative error is less than 2-24, which is 1\/2 ULP divided by the smallest the value could be (the smallest value in the interval for a particular ULP, so the power of two that bounds it). 
E.g., for each number x in [4, 8), we know the number is at least 4 and error is at most 2-22, so the relative error is at most 2-22\/4 = 2-24. (The error cannot be exactly 2-24 because there is no error when converting an exact power of two from float to double, so there is an error only if x is greater than four, so the relative error is less than, not equal to, 2-24.) When you know more about the value being converted, e.g., it is nearer 8 than 4, you can bound the error more tightly.\nIf the number is outside the normal range of a float, errors can be larger. The maximum finite floating-point value is 2128-2104, about 3.40e38. When you convert a double that is 1\/2 ULP (of a float; doubles have finer ULP) more than that or greater to float, infinity is returned, which is, of course, an infinite absolute error and an infinite relative error. (A double that is greater than the maximum finite float but is greater by less than 1\/2 ULP is converted to the maximum finite float and has the same errors discussed in the previous paragraph.)\nThe minimum positive normal float is 2-126, about 1.18e-38. Numbers within 1\/2 ULP of this (inclusive) are converted to it, but numbers less than that are converted to a special denormalized format, where the ULP is fixed at 2-149. The absolute error will be at most 1\/2 ULP, 2-150. The relative error will depend significantly on the value being converted.\nThe above discusses positive numbers. The errors for negative numbers are symmetric.\nIf the value of a double can be represented exactly as a float, there is no error in conversion.\nMapping the input numbers to a new interval can reduce errors in specific situations. As a contrived example, suppose all your numbers are integers in the interval [248, 248+224). Then converting them to float would lose all information that distinguishes the values; they would all be converted to 248. But mapping them to [0, 224) would preserve all information; each different input would be converted to a different result.\nWhich map would best suit your purposes depends on your specific situation.","Q_Score":4,"Tags":"python,numpy,floating-point,compression","A_Id":11019850,"CreationDate":"2012-06-13T01:42:00.000","Title":"What should I worry about if I compress float64 array to float32 in numpy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In rtree, how can I specify the threshold for float equality testing?\nWhen checking nearest neighbours, rtree can return more than the specified number of results, as if two points are equidistant, it returns both them. To check this equidistance, it must have some threshold since the distances are floats. I want to be able to control this threshold.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":236,"Q_Id":11025297,"Users Score":0,"Answer":"Actually it does not need to have a threshold to handle ties. They just happen.\nAssuming you have the data points (1.,0.) and (0.,1.) 
and query point (0.,0.), any implementation I've seen of Euclidean distance will return the exact same distance for both, without any threshold.","Q_Score":2,"Tags":"python,indexing,spatial-index,spatial-query,r-tree","A_Id":11028685,"CreationDate":"2012-06-14T00:37:00.000","Title":"In rtree, how can I specify the threshold for float equality testing?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"They both seem exceedingly similar and I'm curious as to which package would be more beneficial for financial data analysis.","AnswerCount":3,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":135225,"Q_Id":11077023,"Users Score":61,"Answer":"Numpy is required by pandas (and by virtually all numerical tools for Python). Scipy is not strictly required for pandas but is listed as an \"optional dependency\". I wouldn't say that pandas is an alternative to Numpy and\/or Scipy. Rather, it's an extra tool that provides a more streamlined way of working with numerical and tabular data in Python. You can use pandas data structures but freely draw on Numpy and Scipy functions to manipulate them.","Q_Score":202,"Tags":"python,numpy,scipy,pandas","A_Id":11077060,"CreationDate":"2012-06-18T04:45:00.000","Title":"What are the differences between Pandas and NumPy+SciPy in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"They both seem exceedingly similar and I'm curious as to which package would be more beneficial for financial data analysis.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":135225,"Q_Id":11077023,"Users Score":327,"Answer":"pandas provides high level data manipulation tools built on top of NumPy. NumPy by itself is a fairly low-level tool, similar to MATLAB. pandas on the other hand provides rich time series functionality, data alignment, NA-friendly statistics, groupby, merge and join methods, and lots of other conveniences. It has become very popular in recent years in financial applications. I will have a chapter dedicated to financial data analysis using pandas in my upcoming book.","Q_Score":202,"Tags":"python,numpy,scipy,pandas","A_Id":11077215,"CreationDate":"2012-06-18T04:45:00.000","Title":"What are the differences between Pandas and NumPy+SciPy in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to ask if anyone has an idea or example of how to do support vector regression in python with high dimensional output( more than one) using a python binding of libsvm? 
I checked the examples and they are all assuming the output to be one dimensional.","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":3847,"Q_Id":11083921,"Users Score":4,"Answer":"libsvm might not be the best tool for this task.\nThe problem you describe is called multivariate regression, and usually for regression problems, SVM's are not necessarily the best choice.\nYou could try something like group lasso (http:\/\/www.di.ens.fr\/~fbach\/grouplasso\/index.htm - matlab) or sparse group lasso (http:\/\/spams-devel.gforge.inria.fr\/ - seems to have a python interface), which solve the multivariate regression problem with different types of regularization.","Q_Score":3,"Tags":"python,machine-learning,svm,regression,libsvm","A_Id":11172695,"CreationDate":"2012-06-18T13:27:00.000","Title":"Support Vector Regression with High Dimensional Output using python's libsvm","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to read in a dictionary file to filter content specified in the hdfs_input, and I have uploaded it to the cluster using the put command, but I don't know how to access it in my program.\nI tried to access it using path on the cluster like normal files, but it gives the error information: IOError: [Errno 2] No such file or directory\nBesides, is there any way to maintain only one copy of the dictionary for all the machines that runs the job ?\nSo what's the correct way of access files other than the specified input in hadoop jobs?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":91,"Q_Id":11095220,"Users Score":0,"Answer":"Problem solved by adding the file needed with the -file option or file= option in conf file.","Q_Score":0,"Tags":"python,hadoop","A_Id":11098023,"CreationDate":"2012-06-19T06:00:00.000","Title":"How to read other files in hadoop jobs?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"The following saves floating values of a matrix into textfiles\nnumpy.savetxt('bool',mat,fmt='%f',delimiter=',')\nHow to save a boolean matrix ? what is the fmt for saving boolean matrix ?","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":3549,"Q_Id":11100066,"Users Score":3,"Answer":"Thats correct, bools are integers, so you can always go between the two. \n\n\nimport numpy as np\narr = np.array([True, True, False, False])\nnp.savetxt(\"test.txt\", arr, fmt=\"%5i\")\n\n\nThat gives a file with 1 1 0 0","Q_Score":2,"Tags":"python,numpy,save,boolean","A_Id":11101558,"CreationDate":"2012-06-19T11:33:00.000","Title":"how to save a boolean numpy array to textfile in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two dataframes, both indexed by timeseries. I need to add the elements together to form a new dataframe, but only if the index and column are the same. If the item does not exist in one of the dataframes then it should be treated as a zero.\nI've tried using .add but this sums regardless of index and column. 
Also tried a simple combined_data = dataframe1 + dataframe2 but this give a NaN if both dataframes don't have the element.\nAny suggestions?","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":102462,"Q_Id":11106823,"Users Score":2,"Answer":"Both the above answers - fillna(0) and a direct addition would give you Nan values if either of them have different structures.\nIts Better to use fill_value\ndf.add(other_df, fill_value=0)","Q_Score":55,"Tags":"python,pandas","A_Id":42273797,"CreationDate":"2012-06-19T18:11:00.000","Title":"Adding two pandas dataframes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using Python's csv module, is it possible to read an entire, large, csv file into a lazy list of lists?\nI am asking this, because in Clojure there are csv parsing modules that will parse a large file and return a lazy sequence (a sequence of sequences). I'm just wondering if that's possible in Python.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":2302,"Q_Id":11109524,"Users Score":1,"Answer":"The csv module does load the data lazily, one row at a time.","Q_Score":5,"Tags":"python,csv,clojure,lazy-evaluation","A_Id":11109571,"CreationDate":"2012-06-19T21:13:00.000","Title":"Can csv data be made lazy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using Python's csv module, is it possible to read an entire, large, csv file into a lazy list of lists?\nI am asking this, because in Clojure there are csv parsing modules that will parse a large file and return a lazy sequence (a sequence of sequences). I'm just wondering if that's possible in Python.","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":2302,"Q_Id":11109524,"Users Score":2,"Answer":"Python's reader or DictReader are generators. A row is produced only when the object's next() method is called.","Q_Score":5,"Tags":"python,csv,clojure,lazy-evaluation","A_Id":11109589,"CreationDate":"2012-06-19T21:13:00.000","Title":"Can csv data be made lazy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using Python's csv module, is it possible to read an entire, large, csv file into a lazy list of lists?\nI am asking this, because in Clojure there are csv parsing modules that will parse a large file and return a lazy sequence (a sequence of sequences). 
I'm just wondering if that's possible in Python.","AnswerCount":4,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":2302,"Q_Id":11109524,"Users Score":6,"Answer":"The csv module's reader is lazy by default.\nIt will read a line in at a time from the file, parse it to a list, and return that list.","Q_Score":5,"Tags":"python,csv,clojure,lazy-evaluation","A_Id":11109568,"CreationDate":"2012-06-19T21:13:00.000","Title":"Can csv data be made lazy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently using FileStorage class for storing matrices XML\/YAML using OpenCV C++ API.\nHowever, I have to write a Python Script that reads those XML\/YAML files.\nI'm looking for existing OpenCV Python API that can read the XML\/YAML files generated by OpenCV C++ API","AnswerCount":6,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":16840,"Q_Id":11141336,"Users Score":0,"Answer":"pip install opencv-contrib-python for video support to install specific version use pip install opencv-contrib-python","Q_Score":21,"Tags":"c++,python,image-processing,opencv","A_Id":60879363,"CreationDate":"2012-06-21T15:18:00.000","Title":"FileStorage for OpenCV Python API","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to implement the solution to a 0\/1 Knapsack problem with constraints.\nMy problem will have in most cases few variables (~ 10-20, at most 50).\nI recall from university that there are a number of algorithms that in many cases perform better than brute force (I'm thinking, for example, to a branch and bound algorithm).\nSince my problem is relative small, I'm wondering if there is an appreciable advantange in terms of efficiency when using a sophisticate solution as opposed to brute force.\nIf it helps, I'm programming in Python.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1291,"Q_Id":11154101,"Users Score":1,"Answer":"You can either use pseudopolynomial algorithm, which uses dynamic programming, if the sum of weights is small enough. You just calculate, whether you can get weight X with first Y items for each X and Y.\nThis runs in time O(NS), where N is number of items and S is sum of weights.\nAnother possibility is to use meet-in-the middle approach. 
\nPartition items into two halves and:\nFor the first half take every possible combination of items (there are 2^(N\/2) possible combinations in each half) and store its weight in some set.\nFor the second half take every possible combination of items and check whether there is a combination in first half with suitable weight.\nThis should run in O(2^(N\/2)) time.","Q_Score":4,"Tags":"python,algorithm,knapsack-problem","A_Id":11155580,"CreationDate":"2012-06-22T10:02:00.000","Title":"0\/1 Knapsack with few variables: which algorithm?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Are there any C++ (or C) libs that have NumPy-like arrays with support for slicing, vectorized operations, adding and subtracting contents element-by-element, etc.?","AnswerCount":13,"Available Count":1,"Score":0.0614608973,"is_accepted":false,"ViewCount":81916,"Q_Id":11169418,"Users Score":4,"Answer":"Use LibTorch (PyTorch frontend for C++) and be happy.","Q_Score":104,"Tags":"c++,arrays,python-3.x,numpy,dynamic-arrays","A_Id":57324664,"CreationDate":"2012-06-23T12:15:00.000","Title":"NumPy style arrays for C++?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Which is the best to way to export openerp data to csv\/xls file using python so that i can schedule it in openerp( i cant use the client side exporting)? \n\nusing csv python package\nusing xlwt python package\nor any other package?\n\nAnd also how can I dynamically provide the path and name to save this newly created csv file","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":4385,"Q_Id":11187086,"Users Score":3,"Answer":"Why not to use Open ERP client it self.\nyou can go for xlwt if you really require to write a python program to generate it.","Q_Score":0,"Tags":"python,export-to-excel,openerp,export-to-csv","A_Id":11187374,"CreationDate":"2012-06-25T09:57:00.000","Title":"The best way to export openerp data to csv file using python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"sorry if this all seem nooby and unclear, but I'm currently learning Netlogo to model agent-based collective behavior and would love to hear some advice on alternative software choices. My main thing is that I'd very much like to take advantage of PyCuda since, from what I understand, it enables parallel computation. However, does that mean I still have to write the numerical script in some other environment and implement the visuals in yet another one???\nIf so, my questions are:\n\nWhat numerical package should I use? PyEvolve, DEAP, or something else? It appears that PyEvolve is no longer being developed and DEAP is just a wrapper on the outdated(?) EAP.\nGraphic-wise, I find mayavi2 and vtk promising. The problem is, none of the numerical package seems to bind to these readily. Is there no better alternative than to save the numerical output to datafile and feed them into, say, mayavi2?\nAnother option is to generate the data via Netlogo and feed them into a graphing package from (2). 
Is there any disadvantage to doing this?\n\nThank you so much for shedding light on this confusion.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1399,"Q_Id":11198288,"Users Score":1,"Answer":"You almost certainly do not want to use CUDA unless you are running into a significant performance problem. In general CUDA is best used for solving floating point linear algebra problems. If you are looking for a framework built around parallel computations, I'd look towards OpenCL, which can take advantage of GPUs if needed.\nIn terms of visualization, I'd strongly suggest targeting a specific data interchange format and then letting some other program do that rendering for you. The only reason I'd use something like VTK is if for some reason you need more control over the visualization process or you are looking for a real-time solution.","Q_Score":2,"Tags":"python,netlogo,pycuda,agent-based-modeling,mayavi","A_Id":11198804,"CreationDate":"2012-06-25T22:29:00.000","Title":"ABM under python with advanced visualization","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"It is pretty easy to split a rectangle\/square into smaller regions and enforce a maximum area of each sub-region. You can just divide the region into regions with sides of length sqrt(max_area) and treat the leftovers with some care.\nWith a quadrilateral however I am stumped. Let's assume I don't know the angle of any of the corners. Let's also assume that all four points are on the same plane. Also, I don't need for the small regions to be all the same size. The only requirement I have is that the area of each individual region is less than the max area.\nIs there a particular data structure I could use to make this easier?\nIs there an algorithm I'm just not finding?\nCould I use quadtrees to do this? I'm not incredibly versed in trees but I do know how to implement the structure.\nI have GIS work in mind when I'm doing this, but I am fairly confident that that will have no impact on the algorithm to split the quad.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1141,"Q_Id":11217855,"Users Score":1,"Answer":"You could recursively split the quad in half on the long sides until the resulting area is small enough.","Q_Score":1,"Tags":"python,math,geometry,gis","A_Id":11217921,"CreationDate":"2012-06-27T00:39:00.000","Title":"Split quadrilateral into sub-regions of a maximum area","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking to track topic popularity on a very large number of documents.
Furthermore, I would like to give recommendations to users based on topics, rather than the usual bag of words model.\nTo extract the topics I use natural language processing techniques that are beyond the point of this post.\nMy question is how should I persist this data so that:\nI) I can quickly fetch trending data for each topic (in principle, every time a user opens a document, the topics in that document should go up in popularity)\nII) I can quickly compare documents to provide recommendations (here I am thinking of using clustering techniques)\nMore specifically, my questions are:\n1) Should I go with the usual way of storing text mining data? meaning storing a topic occurrence vector for each document, so that I can later measure the euclidean distance between different documents.\n2) Some other way?\nI am looking for specific python ways to do this. I have looked into SQL and NoSQL databases, and also into pytables and h5py, but I am not sure how I would go about implementing a system like this. One of my concerns is how can I deal with an ever growing vocabulary of topics?\nThank you very much","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":612,"Q_Id":11267143,"Users Score":1,"Answer":"Why not have simple SQL tables\nTables:\n\ndocuments with a primary key of id or file name or something\nobservations with foreign key into documents and the term (indexed on both fields probably unique)\n\nThe array approach you mentioned seems like a slow way to get at terms.\nWith sql you can easily allow new terms be added to the observations table.\nEasy to aggregate and even do trending stuff by aggregating by date if the documents table includes a timestamp.","Q_Score":3,"Tags":"python,database,data-mining,text-mining","A_Id":11267361,"CreationDate":"2012-06-29T18:32:00.000","Title":"Storing text mining data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for an algorithm to compare two images (I work with python).\nI find the PIL library, numpy\/scipy and opencv. I know how to transform in greyscale, binary, make an histogram, .... that's ok but I don't know what I have to do with the two images to say \"yes they're similar \/\/ they're probably similar \/\/ they don't match\".\nDo you know the right way to go about it ?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1355,"Q_Id":11289652,"Users Score":1,"Answer":"If you want to check if they are binary equal you can count a checksum on them and compare it. If you want to check if they are similar in some other way , it will be more complicated and definitely would not fit into simple answer posted on Stack Overflow. It just depends on how you define similarity but anyway it would require good programming skills and a lot of code written.","Q_Score":0,"Tags":"python,image,compare","A_Id":11289709,"CreationDate":"2012-07-02T07:50:00.000","Title":"How to compare image with python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have recently installed numpy due to ease using the exe installer for Python 2.7. 
However, when I attempt to install IPython, Pandas or Matplotlib using the exe file, I consistently get a variant of the following error right after the installation commeces (pandas in the following case):\npandas-0.8.0.win32-py2.7[1].exe has stopped working\nThe problem caused the program to stop working correctly: windows close the program and notify whether a solution is available.\nNumPy just worked fine when I installed it. This is extremely frustrating and I would appreciate any insight. \nThanks","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1016,"Q_Id":11289670,"Users Score":0,"Answer":"This happened to me too. It works if you right click and 'Run As Administrator'","Q_Score":0,"Tags":"python,numpy,matplotlib,ipython,pandas","A_Id":11616216,"CreationDate":"2012-07-02T07:52:00.000","Title":".EXE installer crashes when installing Python modules: IPython, Pandas and Matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using pandas in python. I have several Series indexed by dates that I would like to concat into a single DataFrame, but the Series are of different lengths because of missing dates etc. I would like the dates that do match up to match up, but where there is missing data for it to be interpolated or just use the previous date or something like that. What is the easiest way to do this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1369,"Q_Id":11298097,"Users Score":1,"Answer":"If the Series are in a dict data, you need only do:\nframe = DataFrame(data)\nThat puts things into a DataFrame and unions all the dates. If you want to fill values forward, you can call frame = frame.fillna(method='ffill').","Q_Score":2,"Tags":"python,dataframe,concat,pandas","A_Id":11312776,"CreationDate":"2012-07-02T17:06:00.000","Title":"concatenating TimeSeries of different lengths using Pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to change my seterr defaults to be either all 'warn' or all 'ignore'. This can be done interactively by doing np.seterr(all='ignore'). Is there a way to make it a system default? There is no .numpyrc as far as I can tell; is there some other configuration file where these defaults can be changed?\n(I'm using numpy 1.6.1)\nEDIT: The problem was not that numpy's default settings had changed, as I had incorrectly suspected, but that another code, pymc, was changing things that are normally ignore or warn to raise, causing all sorts of undesired and unexpected crashes.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":880,"Q_Id":11351264,"Users Score":1,"Answer":"There is no configuration file for this. 
You will have to call np.seterr() yourself.","Q_Score":1,"Tags":"python,numpy","A_Id":11359378,"CreationDate":"2012-07-05T19:31:00.000","Title":"Change numpy.seterr defaults?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm tokenizing text with nltk, just sentences fed to wordpunct_tokenizer. This splits contractions (e.g. 'don't' to 'don' +\" ' \"+'t') but I want to keep them as one word. I'm refining my methods for a more measured and precise tokenization of text, so I need to delve deeper into the nltk tokenization module beyond simple tokenization. \nI'm guessing this is common and I'd like feedback from others who've maybe had to deal with the particular issue before.\nedit: \nYeah this a general, splattershot question I know\nAlso, as a novice to nlp, do I need to worry about contractions at all?\nEDIT: \nThe SExprTokenizer or TreeBankWordTokenizer seems to do what I'm looking for for now.","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":12956,"Q_Id":11351290,"Users Score":2,"Answer":"Because the number of contractions are very minimal, one way to do it is to search and replace all contractions to it full equivalent (Eg: \"don't\" to \"do not\") and then feed the updated sentences into the wordpunct_tokenizer.","Q_Score":19,"Tags":"python,nlp,nltk","A_Id":11355186,"CreationDate":"2012-07-05T19:32:00.000","Title":"nltk tokenization and contractions","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to customize Serie (in a simple way, and DataFrame by the way :p) from pandas to append extras informations on the display and in the plots? A great thing will be to have the possibility to append informations like \"unit\", \"origin\" or anything relevant for the user that will not be lost during computations, like the \"name\" parameter.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":148,"Q_Id":11362376,"Users Score":1,"Answer":"Right now there is not an easy way to maintain metadata on pandas objects across computations.\nMaintaining metadata has been an open discussion on github for some time now but we haven't had to time code it up.\nWe'd welcome any additional feedback you have (see pandas on github) and would love to accept a pull-request if you're interested in rolling your own.","Q_Score":1,"Tags":"python,pandas","A_Id":11401559,"CreationDate":"2012-07-06T12:39:00.000","Title":"Append extras informations to Series in Pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"While plotting using Matplotlib, I have found how to change the font size of the labels.\nBut, how can I change the size of the numbers in the scale?\nFor clarity, suppose you plot x^2 from (x0,y0) = 0,0 to (x1,y1) = (20,20).\nThe scale in the x-axis below maybe something like \n\n0 1 2 ... 
20.\n\nI want to change the font size of such a scale on the x-axis.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44462,"Q_Id":11379910,"Users Score":0,"Answer":"Simply put, you can use the following commands to set the range of the ticks and change the font size of the tick labels:\nimport matplotlib.pyplot as plt\n# set the range of ticks for the y-axis and x-axis\nplt.yticks(range(0,24,2))\nplt.xticks(range(0,24,2))\n# change the font size of the tick labels for the y-axis and x-axis\nplt.yticks(fontsize=12)\nplt.xticks(fontsize=12)","Q_Score":14,"Tags":"python,matplotlib","A_Id":68635240,"CreationDate":"2012-07-08T01:03:00.000","Title":"How do I change the font size of the scale in matplotlib plots?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In NLTK, using a naive bayes classifier, I know from examples that it's very simple to use a \"bag of words\" approach and look for unigrams or bigrams or both. Could you do the same using two completely different sets of features?\nFor instance, could I use unigrams and length of the training set (I know this has been mentioned once on here)? But of more interest to me would be something like bigrams and \"bigrams\" or combinations of the POS that appear in the document?\nIs this beyond the power of the basic NLTK classifier?\nThanks\nAlex","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1502,"Q_Id":11460115,"Users Score":5,"Answer":"NLTK classifiers can work with any key-value dictionary. I use {\"word\": True} for text classification, but you could also use {\"contains(word)\": 1} to achieve the same effect. You can also combine many features together, so you could have {\"word\": True, \"something something\": 1, \"something else\": \"a\"}. What matters most is that your features are consistent, so you always have the same kind of keys and a fixed set of possible values. Numeric values can be used, but the classifier isn't smart about them - it will treat numbers as discrete values, so that 99 and 100 are just as different as 1 and 100. If you want numbers to be handled in a smarter way, then I recommend using scikit-learn classifiers.","Q_Score":2,"Tags":"python,nlp,nltk","A_Id":11462417,"CreationDate":"2012-07-12T20:28:00.000","Title":"NLTK multiple feature sets in one classifier?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I will need to create an array of integer arrays like [[0,1,2],[4,4,5,7]...[4,5]]. The size of the internal arrays is changeable. The max number of internal arrays is 2^26. So what do you recommend as the fastest way to update this array?\nWhen I use list=[[]] * 2**26, initialization is very fast but updating is very slow. Instead I use\nlist=[], for i in range(2**26): list.append([]).\nNow initialization is slow, update is fast. For example, for 16777216 internal arrays and an average of 0.213827311993 elements in each array, for a 2^26-element array it takes 1.67728900909 sec. It is good, but I will work with much bigger data, hence I need the best way.
Initialization time is not important.\nThank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":497,"Q_Id":11517143,"Users Score":0,"Answer":"What you ask is quite of a problem. Different data structures have different properties. In general, if you need quick access, do not use lists! They have linear access time, which means, the more you put in them, the longer it will take in average to access an element. \nYou could perhaps use numpy? That library has matrices that can be accessed quite fast, and can be reshaped on the fly. However, if you want to add or delete rows, it will might be a bit slow because it generally reallocates (thus copies) the entire data. So it is a trade off. \nIf you are gonna have so many internal arrays of different sizes, perhaps you could have a dictionary that contains the internal arrays. I think if it is indexed by integers it will be much faster than a list. Then, the internal arrays could be created with numpy.","Q_Score":1,"Tags":"python,performance","A_Id":11517362,"CreationDate":"2012-07-17T06:33:00.000","Title":"Fast access and update integer matrix or array in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to get the suggestion on using No-SQL datastore for my particular requirements.\nLet me explain:\n I have to process the five csv files. Each csv contains 5 million rows and also The common id field is presented in each csv.So, I need to merge all csv by iterating 5 million rows.So, I go with python dictionary to merge all files based on the common id field.But here the bottleneck is you can't store the 5 million keys in memory(< 1gig) with python-dictionary.\nSo, I decided to use No-Sql.I think It might be helpful to process the 5 million key value storage.Still I didn't have clear thoughts on this.\nAnyway we can't reduce the iteration since we have the five csvs each has to be iterated for updating the values.\nIs it there an simple steps to go with that?\n If this is the way Could you give me the No-Sql datastore to process the key-value pair?\nNote: We have the values as list type also.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":347,"Q_Id":11522232,"Users Score":0,"Answer":"If this is just a one-time process, you might want to just setup an EC2 node with more than 1G of memory and run the python scripts there. 5 million items isn't that much, and a Python dictionary should be fairly capable of handling it. I don't think you need Hadoop in this case.\nYou could also try to optimize your scripts by reordering the items in several runs, than running over the 5 files synchronized using iterators so that you don't have to keep everything in memory at the same time.","Q_Score":1,"Tags":"python,nosql","A_Id":11522576,"CreationDate":"2012-07-17T12:15:00.000","Title":"Process 5 million key-value data in python.Will NoSql solve?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an image that I load using cv2.imread(). This returns an NumPy array. 
However, I need to pass this into a 3rd party API that requires the data in IplImage format.\nI've scoured everything I could and I've found instances of converting from IplImage to CvMat,and I've found some references to converting in C++, but not from NumPy to IplImage in Python. Is there a function that is provided that can do this conversion?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":10288,"Q_Id":11528009,"Users Score":0,"Answer":"2-way to apply:\n\nimg = cv2.imread(img_path)\nimg_buf = cv2.imencode('.jpg', img)[1].tostring()\njust read the image file:\nimg_buf = open(img_path, 'rb').read()","Q_Score":8,"Tags":"python,opencv","A_Id":51053926,"CreationDate":"2012-07-17T17:48:00.000","Title":"OpenCV: Converting from NumPy to IplImage in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an array that looks something like:\n[0 x1 0 0 y1 0 z1 \n 0 0 x2 0 y2 0 z2 \n 0 0 x3 0 0 y3 z3 \n 0 0 x4 0 0 y4 z4 \n 0 x5 0 0 0 y5 z5 \n 0 0 0 0 y6 0 0] \nI need to determine set of connected line (i.e. line that connects to the points [x1,x2,x3..], [y1,y2,y3...], [z1,z2,z3..]) from the array and then need to find maximum value in each line i.e. max{x1,x2,x3,...}, max{y1,y2,y3..} etc. i was trying to do nearest neighbor search using kdtree but it return the same array. I have array of the size (200 x 8000). is there any easier way to do this? Thx.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":526,"Q_Id":11543991,"Users Score":1,"Answer":"I don't know of anything which provides the functionality you desire out of the box. If you have already written the logic, and it is just slow, have you considered Cython-ing your code. For simple typed looping operations you could get a significant speedup.","Q_Score":1,"Tags":"python,numpy,nearest-neighbor","A_Id":11564801,"CreationDate":"2012-07-18T14:41:00.000","Title":"How to determine set of connected line from an array in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to read a very large (1.7 million records) csv file to a numpy record array. Two of the columns are strings that need to be converted to datetime objects. Additionally, one column needs to be the calculated difference between those datetimes.\nAt the moment I made a custom iterator class that builds a list of lists. I then use np.rec.fromrecords to convert it to the array. \nHowever, I noticed that calling datetime.strptime() so many times really slows things down. I was wondering if there was a more efficient way to do these conversions. The times are accurate to the second within the span of a date. So, assuming that the times are uniformly distributed (they're not), it seems like I'm doing 20x more conversions that necessary (1.7 million \/ (60 X 60 X 24). \nWould it be faster to store converted values in a dictionary {string dates: datetime obj} and first check the dictionary, before doing unnecessary conversions? 
\nOr should I be using numpy functions (I am still new to the numpy library)?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":420,"Q_Id":11584856,"Users Score":0,"Answer":"I could be wrong, but it seems to me like your issue is having repeated occurrences, thus doing the same conversion more times than necessary. IF that interpretation is correct, the most efficient method would depend on how many repeats there are. If you have 100,000 repeats out of 1.7 million, then writing 1.6 million to a dictionary and checking it 1.7 million times might not be more efficient, since it does 1.6+1.7million read\/writes. However, if you have 1 million repeats, then returning an answer (O(1)) for those rather than doing the conversion an extra million times would be much faster. \nAll-in-all, though, python is very slow and you might not be able to speed this up much at all, given that you are using 1.7 million inputs. As for numpy functions, I'm not that well versed in it either, but I believe there's some good documentation for it online.","Q_Score":1,"Tags":"python,numpy","A_Id":11585354,"CreationDate":"2012-07-20T18:18:00.000","Title":"How to efficiently convert dates in numpy record array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm extracting mass data from a legacy backend system using C\/C++ and move it to Python using distutils. After obtaining the data in Python, I put it into a pandas DataFrame object for data analysis. Now I want to go faster and would like to avoid the second step. \nIs there a C\/C++ API for pandas to create a DataFrame in C\/C++, add my C\/C++ data and pass it to Python? I'm thinking of something that is similar to numpy C API.\nI already thougth of creating numpy array objects in C as a workaround but i'm heavily using timeseries data and would love to have the TimeSeries and date_range objects as well.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":28186,"Q_Id":11607387,"Users Score":9,"Answer":"All the pandas classes (TimeSeries, DataFrame, DatetimeIndex etc.) have pure-Python definitions so there isn't a C API. You might be best off passing numpy ndarrays from C to your Python code and letting your Python code construct pandas objects from them.\nIf necessary you could use PyObject_CallFunction etc. to call the pandas constructors, but you'd have to take care of accessing the names from module imports and checking for errors.","Q_Score":20,"Tags":"python,c,api,pandas","A_Id":11610785,"CreationDate":"2012-07-23T06:24:00.000","Title":"Is there a C\/C++ API for python pandas?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am exploring switching to python and pandas as a long-time SAS user. \nHowever, when running some tests today, I was surprised that python ran out of memory when trying to pandas.read_csv() a 128mb csv file. It had about 200,000 rows and 200 columns of mostly numeric data.\nWith SAS, I can import a csv file into a SAS dataset and it can be as large as my hard drive. \nIs there something analogous in pandas? 
\nI regularly work with large files and do not have access to a distributed computing network.","AnswerCount":6,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":76790,"Q_Id":11622652,"Users Score":1,"Answer":"You can use Pytable rather than pandas df.\nIt is designed for large data sets and the file format is in hdf5.\nSo the processing time is relatively fast.","Q_Score":96,"Tags":"python,pandas,sas","A_Id":42165467,"CreationDate":"2012-07-24T00:50:00.000","Title":"Large, persistent DataFrame in pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am reading a bunch of strings from mysql database using python, and after some processing, writing them to a CSV file. However I see some totally junk characters appearing in the csv file. For example when I open the csv using gvim, I see characters like <92>,<89>, <94> etc. \nAny thoughts? I tried doing string.encode('utf-8') before writing to csv but that gave an error that UnicodeDecodeError: 'ascii' codec can't decode byte 0x93 in position 905: ordinal not in range(128)","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1903,"Q_Id":11705114,"Users Score":0,"Answer":"Are all these \"junk\" characters in the range <80> to <9F>? If so, it's highly likely that they're Microsoft \"Smart Quotes\" (Windows-125x encodings). Someone wrote up the text in Word or Outlook, and copy\/pasted it into a Web application. Both Latin-1 and UTF-8 regard these characters as control characters, and the usual effect is that the text display gets cut off (Latin-1) or you see a ?-in-black-diamond-invalid-character (UTF-8).\nNote that Word and Outlook, and some other MS products, provide a UTF-8 version of the text for clipboard use. Instead of <80> to <9F> codes, Smart Quotes characters will be proper multibyte UTF-8 sequences. If your Web page is in UTF-8, you should normally get a proper UTF-8 character instead of the Smart Quote in Windows-125x encoding. Also note that this is not guaranteed behavior, but \"seems to work pretty consistently\". It all depends on a UTF-8 version of the text being available, and properly handled (i.e., you didn't paste into, say, gvim on the PC, and then copy\/paste into a Web text form). This may well also work for various PC applications, so long as they are looking for UTF-8-encoded text.","Q_Score":1,"Tags":"python,mysql,vim,encoding,smart-quotes","A_Id":18619898,"CreationDate":"2012-07-28T22:20:00.000","Title":"Junk characters (smart quotes, etc.) in output file","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've never done any image processing and I was wondering if someone can nudge me in the right direction. \nHere's my issue: I have a bunch of images of black and white images of places around a city. Due to some problems with the camera system, some images contain nothing but a black image with a white vignette around the edge. This vignette is noisy and non-uniform (sometimes it can be on both sides, other times only one).\nWhat are some good ways I can go about detecting these frames? 
I would just need to be able to write a bit.\nMy image set is huge, so I would need this to be an automated process and in the end it should use Python since it needs to integrate into my existing code.\nI was thinking some sort of machine learning algorithm but I'm not sure what to do beyond that.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1550,"Q_Id":11733106,"Users Score":2,"Answer":"If I understand you correctly then you have complete black images with white borders?\nIn this case I think the easiest approach is to compute a histogram of the intensity values of the pixels, i.e. how \u201edark\/bright\u201d is the overall image. I guess that the junk images are significantly darker than the non-junk images. You can then filter the images based on their histogram. For that you have to choose a threshold: Every image darker than this threshold is considered as junk.\nIf this approach is to fuzzy you can easily improve it. For example: just compute the histogram of the inner image without the edges, because this makes the histogram much more darker in comparison to non-junk images.","Q_Score":2,"Tags":"python,image-processing,machine-learning,computer-vision","A_Id":11736635,"CreationDate":"2012-07-31T04:28:00.000","Title":"Detecting Halo in Images","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using rpy2 and I have this issue that's bugging me: I know how to convert a Python array or list to a FloatVector that R (thanks to rpy2) can handle within Python, but I don't know if the opposite can be done, say, I have a FloatVector or Matrix that R can handle and convert it back to a Python array or list...can this be done?\nThanks in advance!","AnswerCount":4,"Available Count":1,"Score":0.1488850336,"is_accepted":false,"ViewCount":5269,"Q_Id":11769471,"Users Score":3,"Answer":"In the latest version of rpy2, you can simply do this in a direct way:\n\nimport numpy as np\narray=np.array(vector_R)","Q_Score":9,"Tags":"python,rpy2","A_Id":52399670,"CreationDate":"2012-08-02T00:30:00.000","Title":"rpy2: Convert FloatVector or Matrix back to a Python array or list?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I do not know the right way to import modules.\nI have a main file which initializes the code, does some preliminary calculations etc.\nI also have 5 functions f1, f2, ... f5. The main code and all functions need Numpy. \nIf I define all functions in the main file, the code runs fine.\n(Importing with : import numpy as np)\nIf I put the functions in a separate file, I get an error:\nError : Global name 'linalg' is not defined. \nWhat is the right way to import modules such that the functions f1 - f5 can access the Numpy functionality?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":9291,"Q_Id":11788950,"Users Score":1,"Answer":"You have to import modules in every file in which you use them. 
Does that answer your question?","Q_Score":4,"Tags":"python,numpy,python-import","A_Id":11788967,"CreationDate":"2012-08-03T03:40:00.000","Title":"Importing Numpy into functions","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that can output a plot using matplotlib and command line inputs.\nWhat i'm doing right now is making the script print the location\/filename of the generated plot image and then when PHP sees it, it outputs an img tag to display it.\nThe python script deletes images that are older than 20 minutes when it runs. It seems like too much of a workaround, and i'm wondering if there's a better solution.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2549,"Q_Id":11789917,"Users Score":2,"Answer":"You could modify your python script so it outputs an image (image\/jpeg) instead of saving it to a file. Then use the tag as normal, but pointing directly to the python script. Your php wouldn't call the python script at all, It would just include it as the src of the image.","Q_Score":2,"Tags":"php,python,matplotlib","A_Id":11790050,"CreationDate":"2012-08-03T05:42:00.000","Title":"What's a good way to output matplotlib graphs on a PHP website?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to generate a heat map image of a floor. I have the following things:\n\nA black & white .png image of the floor\nA three column array stored in Matlab.\n-- The first two columns indicate the X & Y coordinates of the floorpan image\n-- The third coordinate denotes the \"temperature\" of that particular coordinate\n\nI want to generate a heat map of the floor that will show the \"temperature\" strength in those coordinates. However, I want to display the heat map on top of the floor plan so that the viewers can see which rooms lead to which \"temperatures\".\nIs there any software that does this job? Can I use Matlab or Python to do this?\nThanks,\nNazmul","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":4036,"Q_Id":11805983,"Users Score":2,"Answer":"One way to do this would be:\n1) Load in the floor plan image with Matlab or NumPy\/matplotlib.\n2) Use some built-in edge detection to locate the edge pixels in the floor plan.\n3) Form a big list of (x,y) locations where an edge is found in the floor plan.\n4) Plot your heat map\n5) Scatterplot the points of the floor plan as an overlay.\nIt sounds like you know how to do each of these steps individually, so all you'll need to do is look up some stuff on how to overlay plots onto the same axis, which is pretty easy in both Matlab and matplotlib.\nIf you're unfamiliar, the right commands look at are things like meshgrid and surf, possibly contour and their Python equivalents. I think Matlab has a built-in for Canny edge detection. I believe this was more difficult in Python, but if you use the PIL library, the Mahotas library, the scikits.image library, and a few others tailored for image manipulation, it's not too bad. 
SciPy may actually have an edge filter by now though, so check there first.\nThe only sticking point will be if your (x,y) data for the temperature are not going to line up with the (x,y) pixel locations in the image. In that case, you'll have to play around with some x-scale factor and y-scale factor to transform your heat map's coordinates into pixel coordinates first, and then plot the heat map, and then the overlay should work.\nThis is a fairly low-tech way to do it; I assume you just need a quick and dirty plot to illustrate how something's working. This method does have the advantage that you can change the style of the floorplan points easily, making them larger, thicker, thinner, different colors, or transparent, depending on how you want it to interact with the heat map. However, to do this for real, use GIMP, Inkscape, or Photoshop and overlay the heatmap onto the image after the fact.","Q_Score":3,"Tags":"matlab,python-3.x,heatmap,color-mapping","A_Id":11809642,"CreationDate":"2012-08-04T04:49:00.000","Title":"Heat map generator of a floor plan image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I downloaded opencv 2.4 source code from svn, and then I used the command 'cmake -D BUILD_TESTS=OFF' to generate makefile and make and install. I found that the python module was successfully made. But when I import cv2 in python, no module cv2 exists. Is there anything else I should configure? Thanks for your help.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":329,"Q_Id":11824697,"Users Score":0,"Answer":"Have you run 'make install' or 'sudo make install'? While not absolutely necessary, it copies the generated binaries to your system paths.","Q_Score":0,"Tags":"python,opencv","A_Id":11826057,"CreationDate":"2012-08-06T08:15:00.000","Title":"Python For OpenCV2.4","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I downloaded opencv 2.4 source code from svn, and then I used the command 'cmake -D BUILD_TESTS=OFF' to generate makefile and make and install. I found that the python module was successfully made. But when I import cv2 in python, no module cv2 exists. Is there anything else I should configure? Thanks for your help.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":329,"Q_Id":11824697,"Users Score":2,"Answer":"You should either copy the cv2 library to a location in your PYTHONPATH or add your current directory to the PYTHONPATH.","Q_Score":0,"Tags":"python,opencv","A_Id":11824855,"CreationDate":"2012-08-06T08:15:00.000","Title":"Python For OpenCV2.4","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm puzzling over an embedded Python 2.7.2 interpreter issue. 
I've embedded the interpreter in a Visual C++ 2010 application and it essentially just calls user-written scripts.\nMy end-users want to use matplotlib - I've already resolved a number of issues relating to its dependence on numpy - but when they call savefig(), the application crashes with:\n**Fatal Python Error: PyEval_RestoreThread: NULL tstate\nThis isn't an issue running the same script using the standard Python 2.7.2 interpreter, even using the same site-packages, so it seems to definitely be something wrong with my embedding. I call Py_Initialize() - do I need to do something with setting up Python threads?\nI can't quite get the solution from other questions here to work, but I'm more concerned that this is symptomatic of a wider problem in how I'm setting up the Python interpreter.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2495,"Q_Id":11844628,"Users Score":3,"Answer":"Finally resolved this - so going to explain what occurred for the sake of Googlers!\nThis only happened when using third-party libraries like numpy or matplotlib, but actually related to an error elsewhere in my code. As part of the software I wrote, I was extending the Python interpreter following the same basic pattern as shown in the Python C API documentation.\nAt the end of this code, I called the Py_DECREF function on some of the Python objects I had created along the way. My mistake was that I was calling this function on borrowed references, which should not be done.\nThis caused the software to crash with the error above when it reached the Py_Finalize command that I used to clean up. Removing the DECREF on the borrowed references fixed this error.","Q_Score":1,"Tags":"python,c,matplotlib","A_Id":13551794,"CreationDate":"2012-08-07T11:08:00.000","Title":"Matplotlib with TkAgg error: PyEval_RestoreThread: null tstate on save_fig() - do I need threads enabled?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working with survey data loaded from an h5-file as hdf = pandas.HDFStore('Survey.h5') through the pandas package. Within this DataFrame, all rows are the results of a single survey, whereas the columns are the answers for all questions within a single survey. \nI am aiming to reduce this dataset to a smaller DataFrame including only the rows with a certain depicted answer on a certain question, i.e. with all the same value in this column. 
I am able to determine the index values of all rows with this condition, but I can't find how to delete this rows or make a new df with these rows only.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":92185,"Q_Id":11881165,"Users Score":0,"Answer":"If you just need to get the top rows; you can use df.head(10)","Q_Score":46,"Tags":"python,pandas,slice","A_Id":71602253,"CreationDate":"2012-08-09T10:15:00.000","Title":"Slice Pandas DataFrame by Row","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there an equivalent MATLAB function for the range() function in Python?\nI'd really like to be able to type something like range(-10, 11, 5) and get back [-10, -5, 0, 5, 10] instead of having to write out the entire range by hand.","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":21375,"Q_Id":11890437,"Users Score":15,"Answer":"Yes, there is the : operator. The command -10:5:11 would produce the vector [-10, -5, 0, 5, 10];","Q_Score":10,"Tags":"python,matlab","A_Id":11890470,"CreationDate":"2012-08-09T19:17:00.000","Title":"Is there an equivalent of the Python range function in MATLAB?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two large 2-d arrays and I'd like to find their set difference taking their rows as elements. In Matlab, the code for this would be setdiff(A,B,'rows'). The arrays are large enough that the obvious looping methods I could think of take too long.","AnswerCount":3,"Available Count":1,"Score":-0.0665680765,"is_accepted":false,"ViewCount":13283,"Q_Id":11903083,"Users Score":-1,"Answer":"I'm not sure what you are going for, but this will get you a boolean array of where 2 arrays are not equal, and will be numpy fast:\n\nimport numpy as np\na = np.random.randn(5, 5)\nb = np.random.randn(5, 5)\na[0,0] = 10.0\nb[0,0] = 10.0 \na[1,1] = 5.0\nb[1,1] = 5.0\nc = ~(a-b==0)\nprint c\n[[False True True True True]\n [ True False True True True]\n [ True True True True True]\n [ True True True True True]\n [ True True True True True]]","Q_Score":18,"Tags":"python,numpy,set-difference","A_Id":11903766,"CreationDate":"2012-08-10T13:50:00.000","Title":"Find the set difference between two large arrays (matrices) in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was bummed out to see that scikit-learn does not support Python 3...Is there a comparable package anyone can recommend for Python 3?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3322,"Q_Id":11910481,"Users Score":0,"Answer":"Old question, Scikit-Learn is supported by Python3 now.","Q_Score":19,"Tags":"python,python-3.x,machine-learning,scikit-learn","A_Id":58766819,"CreationDate":"2012-08-10T23:24:00.000","Title":"Best Machine Learning package for Python 3x?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web 
Development":0},{"Question":"I am looking for some methods for detecting movement. I've tried two of them. One method is to have background frame that has been set on program start and other frames are compared (threshold) to that background frame. Other method is to compare current frame (let's call that frame A) with frame-1 (frame before A). None of these methods are great. I want to know other methods that work better.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":637,"Q_Id":11925782,"Users Score":1,"Answer":"Please go through the book Learning OpenCV: Computer Vision with the OpenCV Library\nIt has theory as well as example codes.","Q_Score":0,"Tags":"python,opencv","A_Id":11926117,"CreationDate":"2012-08-12T20:48:00.000","Title":"What are some good methods for detecting movement using a camera? (opencv)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I have a video with 3 green spots on it. These spots have a bunch of \"good features to track\" around their perimeter.\nThe spots are very far away from each other so using KMeans I am easily able to identify them as separate clusters.\nThe problem comes in that the ordering of the clusters changes from frame to frame. In one frame a particular cluster is the first in the output list. In the next cluster it is the second in the output list.\nIt is making for a difficult time measuring angles.\nHas anyone come across this or can think of a fix other than writing extra code to compare each list to the list of the previous frame?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":232,"Q_Id":11944796,"Users Score":1,"Answer":"Since k-means is a randomized approach, you will probably encounter this problem even when analyzing the same frame multiple times.\nTry to use the previous frames cluster centers as initial centers for k-means. This may make the ordering stable enough for you, and it may even significantly speed up k-means (assuming that the green spots don't move too fast).\nAlternatively, just try reordering the means so that they are closest to the previous images means.","Q_Score":2,"Tags":"python,opencv,cluster-analysis,k-means","A_Id":12489308,"CreationDate":"2012-08-14T02:02:00.000","Title":"cv.KMeans2 clustering indices inconsistent","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to understand the effect of resize() function on numpy array vs. an h5py dataset. In my application, I am reading a text file line by line and then after parsing the data, write into an hdf5 file. What would be a good approach to implement this. Should I add each new row into a numpy array and keep resizing (increasing the axis) for numpy array (eventually writing the complete numpy array into h5py dataset) or should I just add each new row data into h5py dataset directly and thus resizing the h5py dataset in memory. How does resize() function affects the performance if we keep resizing after each row? Or should I resize after every 100 or 1000 rows? \nThere can be around 200,000 lines in each dataset. 
\nAny help is appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1987,"Q_Id":11979316,"Users Score":1,"Answer":"NumPy arrays are not designed to be resized. It's doable, but wasteful in terms of memory (because you need to create a second array larger than your first one, then fill it with your data... That's two arrays you have to keep) and of course in terms of time (creating the temporary array).\nYou'd be better off starting with lists (or regular arrays, as suggested by @HYRY), then convert to ndarrays when you have a chunk big enough. \nThe question is, when do you need to do the conversion ?","Q_Score":3,"Tags":"python,numpy,h5py","A_Id":11998662,"CreationDate":"2012-08-16T00:57:00.000","Title":"efficient way to resize numpy or dataset?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating.\nI cannot find any documentation that describes whether sklearn supports complex numbers as targets to classifiers. In theory the distance metric should work just fine, so it should work for at least some regression algorithms.\nCan anyone suggest how I can get a regression algorithm to operate with complex numbers as targets?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":2406,"Q_Id":11999147,"Users Score":2,"Answer":"So far I discovered that most classifiers, like linear regressors, will automatically convert complex numbers to just the real part.\nkNN and RadiusNN regressors, however, work well - since they do a weighted average of the neighbor labels and so handle complex numbers gracefully.\nUsing a multi-target classifier is another option, however I do not want to decouple the x and y directions since that may lead to unstable solutions as Colonel Panic mentions, when both results come out close to 0.\nI will try other classifiers with complex targets and update the results here.","Q_Score":7,"Tags":"python,numpy,scipy,scikit-learn","A_Id":12011024,"CreationDate":"2012-08-17T02:48:00.000","Title":"Is it possible to use complex numbers as target labels in scikit learn?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating.\nI cannot find any documentation that describes whether sklearn supports complex numbers as targets to classifiers. 
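A minimal sketch of the buffering strategy described in the resize() answer above: accumulate parsed rows in a plain Python list, convert a chunk at a time, and grow a resizable h5py dataset in blocks instead of once per row. The file name, chunk size, and stand-in "parser" are assumptions for illustration.

```python
import numpy as np
import h5py

CHUNK = 1000   # resize the dataset once per 1000 rows, not once per row
NCOLS = 5

with h5py.File("parsed.h5", "w") as f:
    dset = f.create_dataset("data", shape=(0, NCOLS),
                            maxshape=(None, NCOLS), dtype="f8")
    buffer = []
    for _ in range(3500):                      # stand-in for reading text lines
        buffer.append(np.random.rand(NCOLS))   # stand-in for a parsed row
        if len(buffer) == CHUNK:
            block = np.asarray(buffer)
            dset.resize(dset.shape[0] + len(block), axis=0)
            dset[-len(block):] = block
            buffer = []
    if buffer:                                 # flush the remainder
        block = np.asarray(buffer)
        dset.resize(dset.shape[0] + len(block), axis=0)
        dset[-len(block):] = block
    print(dset.shape)                          # (3500, 5)
```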
In theory the distance metric should work just fine, so it should work for at least some regression algorithms.\nCan anyone suggest how I can get a regression algorithm to operate with complex numbers as targets?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":2406,"Q_Id":11999147,"Users Score":1,"Answer":"Good question. How about transforming angles into a pair of labels, viz. x and y co-ordinates. These are continuous functions of angle (cos and sin). You can combine the results from separate x and y classifiers for an angle? $\\theta = \\sign(x) \\arctan(y\/x)$. However that result will be unstable if both classifiers return numbers near zero.","Q_Score":7,"Tags":"python,numpy,scipy,scikit-learn","A_Id":12003586,"CreationDate":"2012-08-17T02:48:00.000","Title":"Is it possible to use complex numbers as target labels in scikit learn?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating.\nI cannot find any documentation that describes whether sklearn supports complex numbers as targets to classifiers. In theory the distance metric should work just fine, so it should work for at least some regression algorithms.\nCan anyone suggest how I can get a regression algorithm to operate with complex numbers as targets?","AnswerCount":3,"Available Count":3,"Score":0.2605204458,"is_accepted":false,"ViewCount":2406,"Q_Id":11999147,"Users Score":4,"Answer":"Several regressors support multidimensional regression targets. Just view the complex numbers as 2d points.","Q_Score":7,"Tags":"python,numpy,scipy,scikit-learn","A_Id":12004759,"CreationDate":"2012-08-17T02:48:00.000","Title":"Is it possible to use complex numbers as target labels in scikit learn?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have two classes, say Manager and Graph, where each Graph has a reference to its manager, and each Manager has references to a collection of graphs that it owns. I want to be able to do two things\n1) Copy a graph, which performs a deepcopy except that the new graph references the same manager as the old one.\n2) Copy a manager, which creates a new manager and also copies all the graphs it owns.\nWhat is the best way to do this? 
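To make the "view the complex numbers as 2d points" suggestion above concrete, here is a small sketch: regress onto a (cos, sin) pair with a multi-output regressor and convert predictions back to an angle with arctan2, which avoids the -pi/pi jump and the sign instability near zero. The synthetic data and the choice of KNeighborsRegressor are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(500, 3))
angles = np.arctan2(X[:, 1], X[:, 0])                   # some angle-valued target
Y = np.column_stack([np.cos(angles), np.sin(angles)])   # 2-d target, no wrap-around

model = KNeighborsRegressor(n_neighbors=5).fit(X, Y)
pred = model.predict(X[:5])
pred_angles = np.arctan2(pred[:, 1], pred[:, 0])        # back to radians
print(pred_angles)
print(angles[:5])
```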
I don't want to have to roll my own deepcopy implementation, but the standard copy.deepcopy doesn't appear to provide this level of flexibility.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1009,"Q_Id":12014042,"Users Score":1,"Answer":"If there are no other objects referenced in graph (just simple fields), then copy.copy(graph) should make a copy, while copy.deepcopy(manager) should copy the manager and its graphs, assuming there is a list such as manager.graphs.\nBut in general you are right, the copy module does not have this flexibility, and for slightly fancy situations you'd probably need to roll your own.","Q_Score":2,"Tags":"python,deep-copy","A_Id":12014071,"CreationDate":"2012-08-17T22:34:00.000","Title":"Python muliple deepcopy behaviors","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently working on a project which involves performing a lot of statistical calculations on many relatively small datasets. Some of these calculations are as simple as computing a moving average, while others involve slightly more work, like Spearman's Rho or Kendell's Tau calculations. \nThe datasets are essentially a series of arrays packed into a dictionary, whose keys relate to a document id in MongoDb that provides further information about the subset. Each array in the dictionary has no more than 100 values. The dictionaries, however, may be infinitely large. In all reality however, around 150 values are added each year to the dictionary.\nI can use mapreduce to perform all of the necessary calculations. Alternately, I can use Celery and RabbitMQ on a distributed system, and perform the same calculations in python.\nMy question is this: which avenue is most recommended or best-practice?\nHere is some additional information:\n\nI have not benchmarked anything yet, as I am just starting the process of building the scripts to compute the metrics for each dataset. \nUsing a celery\/rabbitmq distributed queue will likely increase the number of queries made against the Mongo database. \nI do not envision the memory usage of either method being a concern, unless the number of simultaneous tasks is very large. The majority of the tasks themselves are merely taking an item within a dataset, loading it, doing a calculation, and then releasing it. So even if the amount of data in a dataset is very large, not all of it will be loaded into memory at one time. Thus, the limiting factor, in my mind, comes down to the speed at which mapreduce or a queued system can perform the calculations. Additionally, it is dependent upon the number of concurrent tasks.\n\nThanks for your help!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":944,"Q_Id":12045278,"Users Score":0,"Answer":"It's impossible to say without benchmarking for certain, but my intuition leans toward doing more calculations in Python rather than mapreduce. My main concern is that mapreduce is single-threaded: One MongoDB process can only run one Javascript function at a time. 
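Since the accepted answer above concludes that the copy module lacks this flexibility and you may need to roll your own, here is one possible sketch using __deepcopy__ hooks on the Manager/Graph classes from the question. The class bodies are invented purely for illustration.

```python
import copy

class Graph:
    def __init__(self, manager, data):
        self.manager = manager          # back-reference, shared when copied
        self.data = data

    def __deepcopy__(self, memo):
        # Deep-copy the payload but keep pointing at the same manager.
        new = Graph(self.manager, copy.deepcopy(self.data, memo))
        memo[id(self)] = new
        return new

class Manager:
    def __init__(self):
        self.graphs = []

    def __deepcopy__(self, memo):
        new = Manager()
        memo[id(self)] = new
        for g in self.graphs:
            # Copied graphs point at the *new* manager.
            new.graphs.append(Graph(new, copy.deepcopy(g.data, memo)))
        return new

m = Manager()
g = Graph(m, {"nodes": [1, 2, 3]})
m.graphs.append(g)

g_copy = copy.deepcopy(g)   # new data, same manager
m_copy = copy.deepcopy(m)   # new manager owning copied graphs
print(g_copy.manager is m, m_copy.graphs[0].manager is m_copy)  # True True
```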
It can, however, serve thousands of queries simultaneously, so you can take advantage of that concurrency by querying MongoDB from multiple Python processes.","Q_Score":1,"Tags":"python,mongodb,mapreduce,celery,distributed-computing","A_Id":12079563,"CreationDate":"2012-08-20T21:13:00.000","Title":"Data analysis using MapReduce in MongoDb vs a Distributed Queue using Celery & RabbitMq","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"The Numpy 'modulus' function is used in a code to check if a certain time is an integral multiple of the time-step.\nBut some weird behavior is seeen. \n\nnumpy.mod(121e-12,1e-12) returns 1e-12\nnumpy.mod(60e-12,1e-12) returns 'a very small value' (compared to 1e-12).\n\nIf you play around numpy.mode('122-126'e-12,1e-12) it gives randomly 0 and 1e-12.\nCan someone please explain why?\nThanks much","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4240,"Q_Id":12150513,"Users Score":0,"Answer":"According to the doc, np.mod(x1,x2)=x1-floor(x1\/x2)*x2. The problem here is that you are working with very small values, a dark domain where floating point errors (truncation...) happen quite often and results are often unpredictable...\nI don't think you should spend a lot of time worrying about that.","Q_Score":0,"Tags":"python,numpy,scipy","A_Id":12150625,"CreationDate":"2012-08-27T22:31:00.000","Title":"Numpy\/Scipy modulus function","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a text file with five columns. First column has year(2011 to 2040), 2nd has Tmax, 3rd has Tmin, 4th has Precip, and fifth has Solar for 30 years. I would like to write a python code which shuffles the first column (year) 10 times with remaining columns having the corresponding original values in them, that is: I want to shuffle year columns only for 10 times so that year 1 will have the corresponding values.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1120,"Q_Id":12150908,"Users Score":0,"Answer":"Are you familiar with NumPy ? Once you have your data in a numpy ndarray, it's a breeze to shuffle the rows while keeping the column orders, without the hurdle of creating many temporaries.\nYou could use a function like np.genfromtxt to read your data file and create a ndarray with different named fields. You could then use the np.random.shuffle function to reorganize the rows.","Q_Score":2,"Tags":"python-2.7","A_Id":12151167,"CreationDate":"2012-08-27T23:27:00.000","Title":"How do I shuffle in Python for a column (years) with keeping the corrosponding column values?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for the fastest algorithm\/package i could use to compute the null space of an extremely large (millions of elements, and not necessarily square) matrix. Any language would be alright, preferably something in Python\/C\/C++\/Java. 
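To make the floating-point explanation in the numpy.mod answer above concrete, here is a small sketch showing the inexact remainders from the question and one tolerant way to test "is t an integral multiple of the step". The tolerance value is an assumption you would tune to your data.

```python
import numpy as np

step = 1e-12
for t in (60e-12, 121e-12, 125e-12):
    # Remainders come out as ~0 or ~step, depending on rounding error
    print(t, np.mod(t, step))

def is_multiple(t, step, atol=1e-9):
    """Tolerant check: is t/step within atol of an integer?"""
    ratio = t / step
    return np.isclose(ratio, np.round(ratio), rtol=0, atol=atol)

print(is_multiple(121e-12, 1e-12))    # True
print(is_multiple(121.5e-12, 1e-12))  # False
```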
Your help would be greatly appreciated!","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":766,"Q_Id":12161182,"Users Score":2,"Answer":"The manner to avoid trashing CPU caches greatly depends on how the matrix is stored\/loaded\/transmitted, a point that you did not address.\nThere are a few generic recommendations:\n\ndivide the problem into worker threads addressing contiguous rows per threads\nincrement pointers (in C) to traverse rows and keep the count on a per-thread basis\nconsolidate the per-thread results at the end of all worker threads.\n\nIf your matrix cells are made of bits (instead of bytes, ints, or arrays) then you can read words (either 4-byte or 8-byte on 32-bit\/64-bit platforms) to speedup the count.\nThere are too many questions left unanswered in the problem description to give you any further guidance.","Q_Score":0,"Tags":"c++,python,c,algorithm,matrix","A_Id":12161433,"CreationDate":"2012-08-28T14:12:00.000","Title":"Computing the null space of a large matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for the fastest algorithm\/package i could use to compute the null space of an extremely large (millions of elements, and not necessarily square) matrix. Any language would be alright, preferably something in Python\/C\/C++\/Java. Your help would be greatly appreciated!","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":766,"Q_Id":12161182,"Users Score":-1,"Answer":"In what kind of data structure is your matrix represented? \nIf you use an element list to represent the matrix, i.e. \"column, row, value\" tuple for one matrix element, then the solution would be just count the number of the tuples (subtracted by the matrix size)","Q_Score":0,"Tags":"c++,python,c,algorithm,matrix","A_Id":12161500,"CreationDate":"2012-08-28T14:12:00.000","Title":"Computing the null space of a large matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two pandas arrays, A and B, that result from groupby operations. A has a 2-level multi-index consisting of both quantile and date. 
B just has an index for date.\nBetween the two of them, the date indices match up (within each quantile index for A).\nIs there a standard Pandas function or idiom to \"broadcast\" B such that it will have an extra level to its multi-index that matches the first multi-index level of A?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1335,"Q_Id":12167324,"Users Score":2,"Answer":"If you just want to do simple arithmetic operations, I think something like A.div(B, level='date') should work.\nAlternatively, you can do something like B.reindex(A.index, level='date') to manually match the indices.","Q_Score":6,"Tags":"python,arrays,pandas,multi-index","A_Id":12170479,"CreationDate":"2012-08-28T20:49:00.000","Title":"How to broadcast to a multiindex","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use matplotlib from my Sublime Text 2 directly via the build command.\nDoes anybody know how I accomplish that? I'm really confused about the whole multiple python installations\/environments. Google didn't help.\nMy python is installed via homebrew and in my terminal (which uses brew python), I have no problem importing matplotlib from there. But Sublime Text shows me an import Error (No module named matplotlib.pyplot).\nI have installed Matplotlib via EPD free. The main matplotlib .dmg installer refused to install it on my disk, because no system version 2.7 was found. I have given up to understand the whole thing. I just want it to work.\nAnd, I have to say, for every bit of joy python brings with it, the whole thing with installations and versions and path, environments is a real hassle. \nBeneath a help for this specific problem I would appreciate any helpful link to understand and this environment mess.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3245,"Q_Id":12173541,"Users Score":5,"Answer":"I had the same problem and the following fix worked for me:\n1 - Open Sublime Text 2 -> Preferences -> Browse Packages\n2 - Go to the Python folder, select file Python.sublime-build\n3 - Replace the existing cmd line for this one:\n\n\"cmd\": [\"\/Library\/Frameworks\/Python.framework\/Versions\/Current\/bin\/python\", \"$file\"],\n\nThen click CMD+B and your script with matplotlib stuff will work.","Q_Score":3,"Tags":"python,matplotlib,sublimetext2,sublimetext","A_Id":12610165,"CreationDate":"2012-08-29T08:16:00.000","Title":"MatPlotLib with Sublime Text 2 on OSX","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Gaussian Mixture Model from python scikit-learn package to train my dataset , however , I fount that when I code\n-- G=mixture.GMM(...)\n-- G.fit(...)\n-- G.score(sum feature)\nthe resulting log probability is positive real number... why is that?\nisn't log probability guaranteed to be negative?\nI get it. 
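A small sketch of the two idioms from the MultiIndex broadcasting answer above (A.div(B, level='date') and B.reindex(A.index, level='date')), using invented quantile/date data.

```python
import pandas as pd

idx = pd.MultiIndex.from_product(
    [[0.25, 0.75], ["2012-08-01", "2012-08-02"]],
    names=["quantile", "date"])
A = pd.Series([10.0, 20.0, 30.0, 40.0], index=idx)
B = pd.Series([2.0, 4.0],
              index=pd.Index(["2012-08-01", "2012-08-02"], name="date"))

print(A.div(B, level="date"))            # divide, matching on the date level
print(B.reindex(A.index, level="date"))  # or broadcast B up to A's MultiIndex
```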
what Gaussian Mixture Model returns to us is the log probability \"density\" instead of probability \"mass\", so a positive value is totally reasonable.\nIf the covariance matrix is near to singular, then the GMM will not perform well, and generally it means the data is not good for such a generative task","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5301,"Q_Id":12175404,"Users Score":13,"Answer":"Positive log probabilities are okay.\nRemember that the GMM computed probability is a probability density function (PDF), so it can be greater than one at any individual point.\nThe restriction is that the PDF must integrate to one over the data domain.\nIf the log probability grows very large, then the inference algorithm may have reached a degenerate solution (common with maximum likelihood estimation if you have a small dataset).\nTo check that the GMM algorithm has not reached a degenerate solution, you should look at the variances for each component. If any of the variances is close to zero, then this is bad. As an alternative, you should use a Bayesian model rather than maximum likelihood estimation (if you aren't doing so already).","Q_Score":7,"Tags":"python,machine-learning,scikit-learn,mixture-model","A_Id":12199026,"CreationDate":"2012-08-29T10:01:00.000","Title":"scikit-learn GMM produce positive log probability","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to write UDFs in Apache Pig. I'll be using Python UDFs. \nMy issue is I have tons of data to analyse and need packages like NumPy and SciPy. But since they don't have Jython support, I can't use them along with Pig. \nDo we have a substitute?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":524,"Q_Id":12183759,"Users Score":0,"Answer":"You can stream through a (C)Python script that imports scipy.\nI am for instance using this to cluster data inside bags, using import scipy.cluster.hierarchy","Q_Score":1,"Tags":"python,numpy,scipy,apache-pig","A_Id":12618627,"CreationDate":"2012-08-29T17:55:00.000","Title":"Using Numpy and SciPy on Apache Pig","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to optimize (memory-wise) the multiplication of X and its transpose X'.\nDoes anyone know if numpy's matrix multiplication takes into consideration that X' is just the transpose of X? What I mean is: does it detect this and therefore not create the object X', but just work on the cols\/rows of X to produce the product? 
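A tiny numerical illustration of the point made in the GMM answer above: a density can exceed 1 at a point, so its log can be positive. Scipy's normal PDF is used here purely as a stand-in for a narrow mixture component.

```python
import numpy as np
from scipy.stats import norm

# A very narrow Gaussian component: density at the mean is ~39.9 >> 1
pdf_at_mean = norm(loc=0.0, scale=0.01).pdf(0.0)
print(pdf_at_mean, np.log(pdf_at_mean))   # ~39.89 and a log of ~ +3.69
# It still integrates to 1 over the whole real line, as a PDF must.
```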
Thank you for any help on this!\nJ.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":505,"Q_Id":12185117,"Users Score":3,"Answer":"In numpy convention, the transpose of X is represented byX.T and you're in luck, X.T is just a view of the original array X, meaning that no copy is done.","Q_Score":1,"Tags":"python,matrix,numpy,python-2.7,matrix-multiplication","A_Id":12185246,"CreationDate":"2012-08-29T19:26:00.000","Title":"Python - Numpy matrix multiplication","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can't find any reference on funcionality to perform Johansen cointegration test in any Python module dealing with statistics and time series analysis (pandas and statsmodel). Does anybody know if there's some code around that can perform such a test for cointegration among time series?","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":18067,"Q_Id":12186994,"Users Score":9,"Answer":"statsmodels doesn't have a Johansen cointegration test. And, I have never seen it in any other python package either.\nstatsmodels has VAR and structural VAR, but no VECM (vector error correction models) yet.\nupdate: \nAs Wes mentioned, there is now a pull request for Johansen's cointegration test for statsmodels. I have translated the matlab version in LeSage's spatial econometrics toolbox and wrote a set of tests to verify that we get the same results. \nIt should be available in the next release of statsmodels.\nupdate 2:\nThe test for cointegration coint_johansen was included in statsmodels 0.9.0 together with the vector error correction models VECM.\n(see also 3rd answer)","Q_Score":13,"Tags":"python,statistics,pandas,statsmodels","A_Id":12187770,"CreationDate":"2012-08-29T21:47:00.000","Title":"Johansen cointegration test in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've seen several other topics on whether to use 2.x or 3.x. However, most of these are at least two years old and do not distinguish between 2.6 and 2.7. \nI am rebooting a scientific project that I ultimately may want to release by 2013. I make use of numpy, scipy, and pylab, among standard 2.6+ modules like itertools. Which version, 2.6 or 2.7, would be better for this?\nThis would also clear up whether or not to use optparse when making my scripts. \nEdit: I am working at a university and the workstation I picked up had Python 2.4. Picking between 2.6 and 2.7 determines which distro to upgrade to. Thanks for the advice!","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":169,"Q_Id":12187115,"Users Score":1,"Answer":"I personally use Debian stable for my own projects so naturally I gravitate toward what the distribution uses as the default Python installation. For Squeeze (current stable), it's 2.6.6 but Wheezy will use 2.7.\nWhy is this relevant? 
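A quick check of the claim in the transpose answer above that X.T is just a view of X, so no data copy is made; purely illustrative.

```python
import numpy as np

X = np.arange(6.0).reshape(2, 3)
XT = X.T
print(XT.base is X)              # True: the transpose is a view of X
print(np.shares_memory(X, XT))   # True: same underlying buffer
X[0, 1] = 99.0
print(XT[1, 0])                  # 99.0 - the "transpose" sees the change
```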
Well, as a programmer there are a number of times I wish I had access to new features from more recent versions of Python, but Debian in general is so conservative that I find it's a good metric of covering wider audience who may be running an older OS.\nSince Wheezy probably will become stable by the end of the year (or earlier next year), I'll be moving to 2.7 as well.","Q_Score":2,"Tags":"python,numpy,version,scipy,optparse","A_Id":12205078,"CreationDate":"2012-08-29T21:59:00.000","Title":"Open Source Scientific Project - Use Python 2.6 or 2.7?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've seen several other topics on whether to use 2.x or 3.x. However, most of these are at least two years old and do not distinguish between 2.6 and 2.7. \nI am rebooting a scientific project that I ultimately may want to release by 2013. I make use of numpy, scipy, and pylab, among standard 2.6+ modules like itertools. Which version, 2.6 or 2.7, would be better for this?\nThis would also clear up whether or not to use optparse when making my scripts. \nEdit: I am working at a university and the workstation I picked up had Python 2.4. Picking between 2.6 and 2.7 determines which distro to upgrade to. Thanks for the advice!","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":169,"Q_Id":12187115,"Users Score":2,"Answer":"If you intend to distribute this code, your answer depends on your target audience, actually. A recent stint in some private sector research lab showed me that Python 2.5 is still often use. \nAnother example: EnSight, a commercial package for 3D visualization\/manipulation, ships with Python 2.5 (and NumPy 1.3 or 1.4, if I'm not mistaken).\nFor a personal project, I'd shoot for 2.7. For a larger audience, I'd err towards 2.6.","Q_Score":2,"Tags":"python,numpy,version,scipy,optparse","A_Id":12187327,"CreationDate":"2012-08-29T21:59:00.000","Title":"Open Source Scientific Project - Use Python 2.6 or 2.7?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've seen several other topics on whether to use 2.x or 3.x. However, most of these are at least two years old and do not distinguish between 2.6 and 2.7. \nI am rebooting a scientific project that I ultimately may want to release by 2013. I make use of numpy, scipy, and pylab, among standard 2.6+ modules like itertools. Which version, 2.6 or 2.7, would be better for this?\nThis would also clear up whether or not to use optparse when making my scripts. \nEdit: I am working at a university and the workstation I picked up had Python 2.4. Picking between 2.6 and 2.7 determines which distro to upgrade to. Thanks for the advice!","AnswerCount":5,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":169,"Q_Id":12187115,"Users Score":9,"Answer":"If everything you need would work with 2.7 I would use it, no point staying with 2.6. Also, .format() works a bit nicer (no need to specify positions in the {} for the arguments to the formatting directives). 
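A tiny example of the .format() difference mentioned in the accepted answer above: automatic field numbering is available from Python 2.7 (and 3.1) onward, while 2.6 requires explicit indices.

```python
# Python 2.7+ / 3.x: auto-numbered fields
print("{} scored {}".format("model", 0.93))

# Python 2.6 needs explicit positions
print("{0} scored {1}".format("model", 0.93))
```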
\nFWIW, I usually use 2.7 or 3.2 and every once in a while I end up porting some code to my Linux box which still runs 2.6.5 and the format() thing is annoying enough :)\n2.7 has been around enough to be supported well - and 3.x is hopefully getting there too.","Q_Score":2,"Tags":"python,numpy,version,scipy,optparse","A_Id":12187140,"CreationDate":"2012-08-29T21:59:00.000","Title":"Open Source Scientific Project - Use Python 2.6 or 2.7?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a device that is connected to my Mac via bluetooth. I would like to use R (or maybe Python, but R is preferred) to read the data real-time and process it. Does anyone know how I can do the data streaming using R on a Mac?\nCheers","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":377,"Q_Id":12187795,"Users Score":0,"Answer":"there is a strong probability that you can enumerate the bluetooth as serial port for the bluetooth and use pyserial module to communicate pretty easily...\nbut if this device does not enumerate serially you will have a very large headache trying to do this... \nsee if there are any com ports that are available if there are its almost definitely enumerating as a serial connection","Q_Score":0,"Tags":"python,macos,r,bluetooth","A_Id":12187989,"CreationDate":"2012-08-29T23:16:00.000","Title":"How can I stream data, on my Mac, from a bluetooth source using R?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When get_range_slice returns, in what order are the columns returned? Is it random or the order in which the columns were created? Is it best practice to iterate through all resulting columns for each row and compare the column name prior to using the value or can one just index into the returning array?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":161,"Q_Id":12212321,"Users Score":4,"Answer":"The columns for each row will be returned in sorted order, sorted by the column key, depending on you comparator_type. The row ordering will depend on your partitioner, and if you use the random partitioner, the rows will come back in a 'random' order.\nIn Cassandra, it is possible for each row to have a different set of columns, so you should really read the column key before using the value. This will depend on the data you have inserted into you cluster.","Q_Score":2,"Tags":"c#,java,c++,python,cassandra","A_Id":12212637,"CreationDate":"2012-08-31T09:14:00.000","Title":"Cassandra get_range_slice","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been experimenting on making a random map for a top down RPG I am making in Python. (and Pyglet) So far I have been making island by starting at 0,0 and going in a random direction 500 times (x+=32 or y -=32 sort of thing) However this doesn't look like a real image very much so I had a look at the Perlin Noise approach. 
How would I get a randomly generated map out of this :\/ (preferably an island) and is it better than the random direction method?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5966,"Q_Id":12232901,"Users Score":0,"Answer":"You could also use 1d perlin noise to calculate the radius from each point to the \"center\" of the island. It should be really easy to implement, but it will make more circular islands, and won't give each point different heights.","Q_Score":1,"Tags":"python,pyglet,terrain,perlin-noise","A_Id":14346374,"CreationDate":"2012-09-02T02:01:00.000","Title":"How to make a 2d map with perlin noise python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am die hard fan of Artificial intelligence and machine learning. I don't know much about them but i am ready to learn. I am currently a web programmer in PHP , and I am learning python\/django for a website.\nNow as this AI field is very wide and there are countless algorithms I don't know where to start.\nBut eventually my main target is to use whichever algorithms; like Genetic Algorithms , Neural networks , optimization which can be programmed in web application to show some stuff.\nFor Example : Recommendation of items in amazon.com\nNow what I want is that in my personal site I have the demo of each algorithm where if I click run and I can show someone what this algorithm can do.\nSo can anyone please guide which algorithms should I study for web based applications.\nI see lot of example in sci-kit python library but they are very calculation and graph based.\nI don't think I can use them from web point of view.\nAny ideas how should I go?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1093,"Q_Id":12242054,"Users Score":1,"Answer":"I assume you are mostly concerned with a general approach to implementing AI in a web context, and not in the details of the AI algorithms themselves. Any computable algorithm can be implemented in any turing complete language (i.e.all modern programming languages). There's no special limitations for what you can do on the web, it's just a matter of representation, and keeping track of session-specific data, and shared data. Also, there is no need to shy away from \"calculation\" and \"graph based\" algorithms; most AI-algorithms will be either one or the other (or indeed both) - and that's part of the fun.\nFor example, as an overall approach for a neural net, you could:\n\nImplement a standard neural network using python classes\nPossibly train the set with historical data\nLoad the state of the net on each request (i.e. from a pickle)\nFeed a part of the request string (i.e. a product-ID) to the net, and output the result (i.e. a weighted set of other products, like \"users who clicked this, also clicked this\")\nAlso, store the relevant part of the request (i.e. the product-ID) in a session variable (i.e. \"previousProduct\"). When a new request (i.e. for another product) comes in from the same user, strengthen\/create the connection between the first product and the next.\nSave the state of the net between each request (i.e. back to pickle)\n\nThat's just one, very general example. 
But keep in mind - there is nothing special about web-programming in this context, except keeping track of session-specific data, and shared data.","Q_Score":0,"Tags":"python,web,machine-learning,artificial-intelligence","A_Id":12243670,"CreationDate":"2012-09-03T04:37:00.000","Title":"What algorithms i can use from machine learning or Artificial intelligence which i can show via web site","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I would like to integrate a function in python and provide the probability density (measure) used to sample values. If it's not obvious, integrating f(x)dx in [a,b] implicitly use the uniform probability density over [a,b], and I would like to use my own probability density (e.g. exponential).\nI can do it myself, using np.random.* but then \n\nI miss the optimizations available in scipy.integrate.quad. Or maybe all those optimizations assume the uniform density?\nI need to do the error estimation myself, which is not trivial. Or maybe it is? Maybe the error is just the variance of sum(f(x))\/n?\n\nAny ideas?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":545,"Q_Id":12245859,"Users Score":0,"Answer":"Another possibilty would be to integrate x -> f( H(x)) where H is the inverse of the cumulative distribution of your probability distribtion. \n[This is because of change of variable: replacing y=CDF(x) and noting that p(x)=CDF'(x) yields the change dy=p(x)dx and thus int{f(x)p(x)dx}==int{f(x)dy}==int{f(H(y))dy with H the inverse of CDF.]","Q_Score":1,"Tags":"python,numpy,scipy,probability,numerical-methods","A_Id":12283724,"CreationDate":"2012-09-03T10:14:00.000","Title":"Integrating a function using non-uniform measure (python\/scipy)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to integrate a function in python and provide the probability density (measure) used to sample values. If it's not obvious, integrating f(x)dx in [a,b] implicitly use the uniform probability density over [a,b], and I would like to use my own probability density (e.g. exponential).\nI can do it myself, using np.random.* but then \n\nI miss the optimizations available in scipy.integrate.quad. Or maybe all those optimizations assume the uniform density?\nI need to do the error estimation myself, which is not trivial. Or maybe it is? 
Maybe the error is just the variance of sum(f(x))\/n?\n\nAny ideas?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":545,"Q_Id":12245859,"Users Score":0,"Answer":"Just for the sake of brevity, 3 ways were suggested for calculating the expected value of f(x) under the probability p(x):\n\nAssuming p is given in closed-form, use scipy.integrate.quad to evaluate f(x)p(x)\nAssuming p can be sampled from, sample N values x=P(N), then evaluate the expected value by np.mean(f(X)) and the error by np.std(f(X))\/np.sqrt(N)\nAssuming p is available at stats.norm, use stats.norm.expect(f)\nAssuming we have the CDF(x) of the distribution rather than p(x), calculate H=Inverse[CDF] and then integrate f(H(x)) using scipy.integrate.quad","Q_Score":1,"Tags":"python,numpy,scipy,probability,numerical-methods","A_Id":12268227,"CreationDate":"2012-09-03T10:14:00.000","Title":"Integrating a function using non-uniform measure (python\/scipy)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"how can I clear a complete csv file with python. Most forum entries that cover the issue of deleting row\/columns basically say, write the stuff you want to keep into a new file. I need to completely clear a file - how can I do that?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":41353,"Q_Id":12277864,"Users Score":1,"Answer":"The Python csv module is only for reading and writing whole CSV files but not for manipulating them. If you need to filter data from file then you have to read it, create a new csv file and write the filtered rows back to new file.","Q_Score":8,"Tags":"python,csv","A_Id":12277912,"CreationDate":"2012-09-05T09:02:00.000","Title":"python clear csv file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using matplotlib to draw a bar chart with many different colors. I also draw a number of markers on the plot with scatter. \nSince I am already using many different colors for the bars, I do not want to use a separate contrasting color for the marks, as that would add a big limit to the color space I can choose my bars from.\nTherefore, the question is whether it is possible to have scatter draw marks, not with a given color, but with a color that is the inverse of the color that happens to be behind any given mark, wherever it is placed. \nAlso, note that the marks may fully overlap bars, partly overlap bars, or not overlap a bar at all.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":385,"Q_Id":12350693,"Users Score":0,"Answer":"I post here a schematic approach how to solve your problem with out any real python code, it might help though.\nWhen actually plotting you need to store all in some kind of two lists, which will enable you to access them later. \n\nFor each element, bar and marker you can get the color.\nFor each marker you can find if it overlapping or inside a bar, for example you could usenxutils.pntpoly() [point in polygon test] from matplotlib itself.\nNow you can decide on the best color. 
If you know the color of the bar in RGB format you can\ncalculate the completing color of the marker using some simple rules you can define.\nWhen you got the color use the method set_color() or the appropriate method the object has.","Q_Score":1,"Tags":"python,matplotlib","A_Id":12363743,"CreationDate":"2012-09-10T11:25:00.000","Title":"Inverted color marks in matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have data that needs to stay in the exact sequence it is entered in (genome sequencing) and I want to search approximately one billion nodes of around 18 members each to locate patterns.\nObviously speed is an issue with this large of a data set, and I actually don't have any data that I can currently use as a discrete key, since the basis of the search is to locate and isolate (but not remove) duplicates.\nI'm looking for an algorithm that can go through the data in a relatively short amount of time to locate these patterns and similarities, and I can work out the regex expressions for comparison, but I'm not sure how to get a faster search than O(n).\nAny help would be appreciated.\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":386,"Q_Id":12470094,"Users Score":0,"Answer":"probably what you want is called \"de novo assembly\"\nan approach would be to calculate N-mers, and use these in an index\nnmers will become more important if you need partial matches \/ mismatches\nif billion := 1E9, python might be too weak\nalso note that 18 bases* 2 bits := 36 bits of information to enumerate them. That is tentavely close to 32 bits and could fit into 64 bits. hashing \/ bitfiddling might be an option","Q_Score":1,"Tags":"python,sql,dna-sequence,genome","A_Id":12474645,"CreationDate":"2012-09-18T03:43:00.000","Title":"Fast algorithm comparing unsorted data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use imshow function with interpolation='nearest' on a grayscale image and get a nice color picture as a result, looks like it does some sort of color segmentation for me, what exactly is going on there? \nI would also like to get something like this for image processing, is there some function on numpy arrays like interpolate('nearest') out there?\nEDIT: Please correct me if I'm wrong, it looks like it does simple pixel clustering (clusters are colors of the corresponding colormap) and the word 'nearest' says that it takes the nearest colormap color (probably in the RGB space) to decide to which cluster the pixel belongs.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":22970,"Q_Id":12473511,"Users Score":25,"Answer":"interpolation='nearest' simply displays an image without trying to interpolate between pixels if the display resolution is not the same as the image resolution (which is most often the case). It will result an image in which pixels are displayed as a square of multiple pixels.\nThere is no relation between interpolation='nearest' and the grayscale image being displayed in color. By default imshow uses the jet colormap to display an image. 
If you want it to be displayed in greyscale, call the gray() method to select the gray colormap.","Q_Score":17,"Tags":"python,image-processing,numpy,matplotlib","A_Id":12473913,"CreationDate":"2012-09-18T08:52:00.000","Title":"What does matplotlib `imshow(interpolation='nearest')` do?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to perform some array calculations using NumPy for a view callable in Pyramid. The array I'm using is quite large (3500x3500), so I'm wondering where the best place to load it is for repeated use.\nRight now my application is a single page and I am using a single view callable.\nThe array will be loaded from disk and will not change.","AnswerCount":2,"Available Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":409,"Q_Id":12497545,"Users Score":3,"Answer":"If the array is something that can be shared between threads then you can store it in the registry at application startup (config.registry['my_big_array'] = ??). If it cannot be shared then I'd suggest using a queuing system with workers that can always have the data loaded, probably in another process. You can hack this by making the value in the registry be a threadlocal and then storing a new array in the variable if one is not there already, but then you will have a copy of the array per thread and that's really not a great idea for something that large.","Q_Score":1,"Tags":"python,numpy,pyramid","A_Id":12497790,"CreationDate":"2012-09-19T15:09:00.000","Title":"Using NumPy in Pyramid","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to perform some array calculations using NumPy for a view callable in Pyramid. The array I'm using is quite large (3500x3500), so I'm wondering where the best place to load it is for repeated use.\nRight now my application is a single page and I am using a single view callable.\nThe array will be loaded from disk and will not change.","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":409,"Q_Id":12497545,"Users Score":2,"Answer":"I would just load it in the obvious place in the code, where you need to use it (in your view, I guess?) and see if you have performance problems. It's better to work with actual numbers than try to guess what's going to be a problem. You'll usually be surprised by the reality.\nIf you do see performance problems, assuming you don't need a copy for each of multiple threads, try just loading it in the global scope after your imports. If that doesn't work, try moving it into its own module and importing that. If that still doesn't help... 
I don't know what then.","Q_Score":1,"Tags":"python,numpy,pyramid","A_Id":12497850,"CreationDate":"2012-09-19T15:09:00.000","Title":"Using NumPy in Pyramid","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"It would be useful to save the session variables which could be loaded easily into memory at a later stage.","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":10066,"Q_Id":12504951,"Users Score":1,"Answer":"There is also a magic command, history, that can be used to write all the commands\/statements given by user.\nSyntax : %history -f file_name.\nAlso %save file_name start_line-end_line, where star_line is the starting line number and end_line is ending line number. Useful in case of selective save.\n%run can be used to execute the commands in the saved file","Q_Score":22,"Tags":"python,ipython,pandas","A_Id":42903054,"CreationDate":"2012-09-20T01:20:00.000","Title":"Save session in IPython like in MATLAB?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have data points in x,y,z format. They form a point cloud of a closed manifold. How can I interpolate them using R-Project or Python? (Like polynomial splines)","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1960,"Q_Id":12534813,"Users Score":1,"Answer":"By \"compact manifold\" do you mean a lower dimensional function like a trajectory or a surface that is embedded in 3d? You have several alternatives for the surface-problem in R depending on how \"parametric\" or \"non-parametric\" you want to be. Regression splines of various sorts could be applied within the framework of estimating mean f(x,y) and if these values were \"tightly\" spaced you may get a relatively accurate and simple summary estimate. There are several non-parametric methods such as found in packages 'locfit', 'akima' and 'mgcv'. (I'm not really sure how I would go about statistically estimating a 1-d manifold in 3-space.) \nEdit: But if I did want to see a 3D distribution and get an idea of whether is was a parametric curve or trajectory, I would reach for package:rgl and just plot it in a rotatable 3D frame.\nIf you are instead trying to form the convex hull (for which the word interpolate is probably the wrong choice), then I know there are 2-d solutions and suspect that searching would find 3-d solutions as well. Constructing the right search strategy will depend on specifics whose absence the 2 comments so far reflects. I'm speculating that attempting to model lower and higher order statistics like the 1st and 99th percentile as a function of (x,y) could be attempted if you wanted to use a regression effort to create boundaries. 
There is a quantile regression package, 'rq' by Roger Koenker that is well supported.","Q_Score":2,"Tags":"python,r,3d,interpolation,splines","A_Id":12536067,"CreationDate":"2012-09-21T16:54:00.000","Title":"How interpolate 3D coordinates","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've installed new instance of python-2.7.2 with brew. Installed numpy from pip, then from sources. I keep getting \nnumpy.distutils.npy_pkg_config.PkgNotFound: Could not find file(s) ['\/usr\/local\/Cellar\/python\/2.7.2\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/numpy\/core\/lib\/npy-pkg-config\/npymath.ini']\nwhen I try to install scipy, either from sources or by pip, and it drives me mad.\nScipy's binary installer tells me, that python 2.7 is required and that I don't have it (I have 2 versions installed).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":253,"Q_Id":12565351,"Users Score":0,"Answer":"EPD distribution saved the day.","Q_Score":0,"Tags":"numpy,python-2.7,scipy","A_Id":12616286,"CreationDate":"2012-09-24T12:49:00.000","Title":"Trouble installing scipy on Mac OSX Lion","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm making live video GUI using Python and Glade-3, but I'm finding it hard to convert the Numpy array that I have into something that can be displayed in Glade. The images are in black and white with just a single value giving the brightness of each pixel. I would like to be able to draw over the images in the GUI so I don't know whether there is a specific format I should use (bitmap\/pixmap etc) ?\nAny help would be much appreciated!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":760,"Q_Id":12580198,"Users Score":2,"Answer":"In the end i decided to create a buffer for the pixels using: \nself.pixbuf = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB,0,8,1280,1024)\nI then set the image from the pixel buffer:\nself.liveImage.set_from_pixbuf(self.pixbuf)","Q_Score":2,"Tags":"python,arrays,numpy,gtk,glade","A_Id":12638921,"CreationDate":"2012-09-25T09:37:00.000","Title":"How do you display a 2D numpy array in glade-3 ?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Arrayfire on Python and I can't use the af.sum() function since my input array has NaNs in it and it would return NAN as sum.\nUsing numpy.nansum\/numpy.nan_to_num is not an option due to speed problems.\nI just need a way to convert those NaNs to floating point zeros in arrayfire.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":299,"Q_Id":12582140,"Users Score":1,"Answer":"bottleneck is worth looking into. 
They have performed several optimizations over the numpy.nanxxx functions which, in my experience, makes it around 5x faster than numpy.","Q_Score":3,"Tags":"python,arrayfire","A_Id":12584253,"CreationDate":"2012-09-25T11:36:00.000","Title":"Check Arrayfire Array against NaNs","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to reconcile two separate dataframes. Each row within the two dataframes has a unique id that I am using to match the two dataframes. Without using a loop, how can I reconcile one dataframe against another and vice-versa? \nI tried merging the two dataframes on an index (unique id) but the problem I run into when I do this is when there are duplicate rows of data. Is there a way to identify duplicate rows of data and put that data into an array or export it to a CSV?\nYour help is much appreciated. Thanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":745,"Q_Id":12593759,"Users Score":0,"Answer":"Try DataFrame.duplicated and DataFrame.drop_duplicates","Q_Score":1,"Tags":"python,pandas","A_Id":12594030,"CreationDate":"2012-09-26T02:30:00.000","Title":"Pandas Data Reconcilation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing some code in fortran (f2py) in order to gain some speed because of a large amount of calculations that would be quite bothering to do in pure Python.\nI was wondering if setting NumPy arrays in Python as order=Fortran will kind of slow down\nthe main python code with respect to the classical C-style order.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":502,"Q_Id":12606027,"Users Score":1,"Answer":"There shouldn't be any slow-down. Since NumPy 1.6, most ufuncs (ie, the basic 'universal' functions) take an optional argument allowing a user to specify the memory layout of her array: by default, it's K, meaning that the 'the element ordering of the inputs (is matched) as closely as possible`. \nSo, everything should be taken care of below the hood.\nAt worst, you could always switch from one order to another with the order parameter of np.array (but that will copy your data and is probably not worth it).","Q_Score":1,"Tags":"python,performance,numpy,f2py","A_Id":12606715,"CreationDate":"2012-09-26T16:12:00.000","Title":"f2py speed with array ordering","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to write an algorithm that would benefit from the GPU's superior hashing capability over the CPU.\nIs PyOpenGL the answer? I don't want to use drawing tools, but simply run a \"vanilla\" python script ported to the GPU.\nI have an ATI\/AMD GPU if that means anything.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":389,"Q_Id":12699376,"Users Score":2,"Answer":"Is PyOpenGL the answer?\n\nNo. At least not in the way you expect it. 
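A short sketch of the DataFrame.duplicated / drop_duplicates suggestion in the reconciliation answer above, including exporting the duplicate rows to CSV as the question asks; the column names and output file name are assumptions.

```python
import pandas as pd

df = pd.DataFrame({
    "id":     [1, 2, 2, 3, 3, 3],
    "amount": [10, 20, 20, 30, 31, 30],
})

# All rows whose id appears more than once (keep=False flags every copy)
dupes = df[df.duplicated(subset="id", keep=False)]
dupes.to_csv("duplicates.csv", index=False)

# One row per id, for a clean merge/reconciliation afterwards
deduped = df.drop_duplicates(subset="id", keep="first")
print(deduped)
```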
If your GPU does support OpenGL-4.3 you could use Compute Shaders in OpenGL, but those are not written in Python\n\nbut simply run a \"vanilla\" python script ported to the GPU.\n\nThat's not how GPU computing works. You have to write the shaders of computation kernels in a special language. Either OpenCL or OpenGL Compute Shaders or, specific to NVIDIA, in CUDA.\nPython would then just deliver the framework for getting the GPU computation running.","Q_Score":1,"Tags":"python,opengl","A_Id":12699435,"CreationDate":"2012-10-02T22:30:00.000","Title":"Can normal algos run on PyOpenGL?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently I have a list of 110,000 donors in Excel. One of the pieces of information they give to us is their occupation. I would like to condense this list down to say 10 or 20 categories that I define.\nNormally I would just chug through this, going line by line, but since I have to do this for a years worth of data, I don't really have the time to do a line by line of 1,000,000+ rows.\nIs there anyway to define my 10 or 20 categories and then have python sort it out from there? \nUpdate:\nThe data is poorly formatted. People self populate a field either online or on a slip of paper and then mail it into a data processing company. There is a great deal of variance. CEO, Chief Executive, Executive Office, the list goes on. \nI used a SORT UNIQ comand and found that my list has ~13,000 different professions.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1266,"Q_Id":12711743,"Users Score":1,"Answer":"I assume that the data are noisy, in the sense that it could just be anything at all, written in. The main difficulty here is going to be how to define the mapping between your input data, and categories, and that is going to involve, in the first place, looking through the data.\nI suggest that you look at what you have, and draw up a list of mappings from input occupations to categories. You can then use pretty much any tool (and if you're using excel, stick with excel) to apply that mapping to each row. Some rows will not fall into any category. You should look at them, and figure out if that is because your mapping is inadequate (e.g. you didn't think of how to deal with veterinarians), or if it is because the data are noisy. If it's noise, you can either deal with the remainder by hand, or try to use some other technique to categorise the data, e.g. regular expressions or some kind of natural language processing library.\nOnce you have figured out what your problem cases are, come back and ask us about them, with sample data, and the code you have been using.\nIf you can't even take the first step in figuring out how to run the mapping, do some research, try to write something, then come back with a specific question about that.","Q_Score":3,"Tags":"python,list","A_Id":12711895,"CreationDate":"2012-10-03T15:27:00.000","Title":"categorizing items in a list with python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking to compute similarities between users and text documents using their topic representations. I.e. 
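The keyword-to-category mapping described in the occupation-categorisation answer above can be sketched in a few lines; the categories and keywords below are invented examples, not taken from the actual donor data:

```python
import re

# hypothetical mapping from category to the keywords that signal it
CATEGORIES = {
    "executive": ["ceo", "chief executive", "executive"],
    "medical": ["doctor", "nurse", "physician", "veterinarian"],
    "education": ["teacher", "professor"],
}

def categorize(occupation):
    occ = occupation.lower()
    for category, keywords in CATEGORIES.items():
        if any(re.search(r"\b" + re.escape(k) + r"\b", occ) for k in keywords):
            return category
    return "unmatched"  # review these rows by hand or refine the mapping

print(categorize("Chief Executive Officer"))  # -> executive
```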
each document and user is represented by a vector of topics (e.g. Neuroscience, Technology, etc) and how relevant that topic is to the user\/document.\nMy goal is then to compute the similarity between these vectors, so that I can find similar users, articles and recommended articles.\nI have tried to use Pearson Correlation but it ends up taking too much memory and time once it reaches ~40k articles and the vectors' length is around 10k.\nI am using numpy.\nCan you imagine a better way to do this? or is it inevitable (on a single machine)?\nThank you","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1318,"Q_Id":12713797,"Users Score":0,"Answer":"My tricks are using a search engine such as ElasticSearch, and it works very well, and in this way we unified the api of all our recommend systems. Detail is listed as below:\n\nTraining the topic model by your corpus, each topic is an array of words and each of the word is with a probability, and we take the first 6 most probable words as a representation of a topic.\nFor each document in your corpus, we can inference a topic distribution for it, the distribution is an array of probabilities for each topic.\nFor each document, we generate a fake document with the topic distribution and the representation of the topics, for example the size of the fake document is about 1024 words.\nFor each document, we generate a query with the topic distribution and the representation of the topics, for example the size of the query is about 128 words.\n\nAll preparation is finished as above. When you want to get a list of similar articles or others, you can just perform a search:\n\nGet the query for your document, and then perform a search by the query on your fake documents.\n\nWe found this way is very convenient.","Q_Score":3,"Tags":"python,numpy,recommendation-engine,topic-modeling,gensim","A_Id":15066821,"CreationDate":"2012-10-03T17:32:00.000","Title":"Topic-based text and user similarity","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a classifier that I trained using Python's scikit-learn. How can I use the classifier from a Java program? Can I use Jython? Is there some way to save the classifier in Python and load it in Java? Is there some other way to use it?","AnswerCount":6,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":42287,"Q_Id":12738827,"Users Score":1,"Answer":"I found myself in a similar situation.\nI'll recommend carving out a classifier microservice. You could have a classifier microservice which runs in python and then expose calls to that service over some RESTFul API yielding JSON\/XML data-interchange format. I think this is a cleaner approach.","Q_Score":35,"Tags":"java,python,jython,scikit-learn","A_Id":50292755,"CreationDate":"2012-10-05T02:50:00.000","Title":"How can I call scikit-learn classifiers from Java?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to come up with a topic-based recommender system to suggest relevant text documents to users.\nI trained a latent semantic indexing model, using gensim, on the wikipedia corpus. 
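One way to read the classifier-microservice suggestion above is a tiny HTTP wrapper around the pickled scikit-learn model; Flask is used here purely as an illustration (the answer names no framework), and the file name classifier.pkl and the JSON payload shape are hypothetical:

```python
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)
with open("classifier.pkl", "rb") as f:  # hypothetical: the trained, pickled model
    clf = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. {"features": [1.2, 0.4, ...]}
    label = clf.predict([features])[0]
    return jsonify(prediction=str(label))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # the Java side just POSTs JSON to /predict
```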
This lets me easily transform documents into the LSI topic distributions. My idea now is to represent users the same way. However, of course, users have a history of viewed articles, as well as ratings of articles.\nSo my question is: how to represent the users?\nAn idea I had is the following: represent a user as the aggregation of all the documents viewed. But how to take into account the rating?\nAny ideas?\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":602,"Q_Id":12763608,"Users Score":0,"Answer":"\"represent a user as the aggregation of all the documents viewed\" : that might work indeed, given that you are in linear spaces. You can easily add all the documents vectors in one big vector.\nIf you want to add the ratings, you could simply put a coefficient in the sum.\nSay you group all documents rated 2 in a vector D2, rated 3 in D3 etc... you then simply define a user vector as U=c2*D2+c3*D3+...\nYou can play with various forms for c2, c3, but the easiest approach would be to simply multiply by the rating, and divide by the max rating for normalisation reasons. \nIf your max rating is 5, you could define for instance c2=2\/5, c3=3\/5 ...","Q_Score":2,"Tags":"python,machine-learning,recommendation-engine,latent-semantic-indexing,topic-modeling","A_Id":14583682,"CreationDate":"2012-10-06T20:31:00.000","Title":"User profiling for topic-based recommender system","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to come up with a topic-based recommender system to suggest relevant text documents to users.\nI trained a latent semantic indexing model, using gensim, on the wikipedia corpus. This lets me easily transform documents into the LSI topic distributions. My idea now is to represent users the same way. However, of course, users have a history of viewed articles, as well as ratings of articles.\nSo my question is: how to represent the users?\nAn idea I had is the following: represent a user as the aggregation of all the documents viewed. But how to take into account the rating?\nAny ideas?\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":602,"Q_Id":12763608,"Users Score":1,"Answer":"I don't think that's working with lsa. \nBut you maybe could do some sort of k-NN classification, where each user's coordinates are the documents viewed. Each object (=user) sends out radiation (intensity is inversely proportional to the square of the distance). The intensity is calculated from the ratings on the single documents.\nThen you can place a object (user) in in this hyperdimensional space, and see what other users give the most 'light'.\nBut: Can't Apache Lucene do that whole stuff for you?","Q_Score":2,"Tags":"python,machine-learning,recommendation-engine,latent-semantic-indexing,topic-modeling","A_Id":12764041,"CreationDate":"2012-10-06T20:31:00.000","Title":"User profiling for topic-based recommender system","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I read that Apriori algorithm is used to fetch association rules from the dataset like a set of tuples. It helps us to find the most frequent 1-itemsets, 2-itemsets and so-on. 
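The rating-weighted aggregation described in the answer above (U = c2*D2 + c3*D3 + ..., with c_i = rating / max rating) is a one-liner in numpy; the array contents are illustrative:

```python
import numpy as np

# LSI topic vectors of the documents a user viewed (n_docs x n_topics)
doc_topics = np.array([[0.2, 0.7, 0.1],
                       [0.6, 0.1, 0.3],
                       [0.4, 0.4, 0.2]])
ratings = np.array([2, 5, 3])           # the user's ratings of those documents

weights = ratings / ratings.max()        # c_i = rating / max_rating
user_vec = weights @ doc_topics          # weighted sum of the document vectors
user_vec /= np.linalg.norm(user_vec)     # optional: normalise before cosine similarity
```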
My problem is bit different. I have a dataset, which is a set of tuples, each of varying size - as follows :\n(1, 234, 56, 32)\n(25, 4575, 575, 464, 234, 32)\n. . . different size tuples\nThe domain for entries is huge, which means that I cannot have a binary vector for each tuple, that tells me if item 'x' is present in tuple. Hence, I do not see Apriori algorithm fitting here.\nMy target is to answer questions like :\n\nGive me the ranked list of 5 numbers, that occur with 234 most of the time\nGive me the top 5 subsets of size 'k' that occur most frequently together\n\nRequirements : Exact representation of numbers in output (not approximate), Domain of numbers can be thought of as 1 to 1 billion. \nI have planned to use the simple counting methods, if no standard algorithm fits here. But, if you guys know about some algorithm that can help me, please let me know","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":631,"Q_Id":12803495,"Users Score":1,"Answer":"For Apriori, you do not need to have tuples or vectors. It can be implemented with very different data types. The common data type is a sorted item list, which could as well look like 1 13 712 1928 123945 191823476 stored as 6 integers. This is essentially equivalent to a sparse binary vector and often very memory efficient. Plus, APRIORI is actually designed to run on data sets too large for your main memory!\nScalability of APRIORI is a mixture of the number of transactions and the number of items. Depending of how they are, you might prefer different data structures and algorithms.","Q_Score":1,"Tags":"python,data-mining,graph-algorithm,recommendation-engine,apriori","A_Id":12807402,"CreationDate":"2012-10-09T15:31:00.000","Title":"Algorithms for Mining Tuples of Data on huge sample space","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am having a hard time understanding what scipy.cluster.vq really does!! \nOn Wikipedia it says Clustering can be used to divide a digital image into distinct regions for border detection or object recognition.\non other sites and books it says we can use clustering methods for clustering images for finding groups of similar images.\nAS i am interested in image processing ,I really need to fully understand what clustering is .\nSo\nCan anyone show me simple examples about using scipy.cluster.vq with images??","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1663,"Q_Id":12808050,"Users Score":0,"Answer":"The second is what clustering is: group objects that are somewhat similar (and that could be images). Clustering is not a pure imaging technique.\nWhen processing a single image, it can for example be applied to colors. This is a quite good approach for reducing the number of colors in an image. If you cluster by colors and pixel coordinates, you can also use it for image segmentation, as it will group pixels that have a similar color and are close to each other. 
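The "simple counting" fallback mentioned in the question above goes a long way with a hash map of pairs; a minimal sketch for the pair queries (the sample tuples are invented):

```python
from collections import Counter
from itertools import combinations

transactions = [(1, 234, 56, 32), (25, 4575, 575, 464, 234, 32), (234, 56, 32)]

pair_counts = Counter()
for t in transactions:
    pair_counts.update(combinations(sorted(set(t)), 2))

# top 5 numbers that co-occur with 234 most often
with_234 = Counter()
for (a, b), n in pair_counts.items():
    if a == 234:
        with_234[b] += n
    elif b == 234:
        with_234[a] += n
print(with_234.most_common(5))

# for the most frequent k-subsets, replace 2 with k in combinations(...)
```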
But this is an application domain of clustering, not pure clustering.","Q_Score":1,"Tags":"python,scipy,cluster-analysis","A_Id":12810026,"CreationDate":"2012-10-09T20:39:00.000","Title":"Can anyone provide me with some clustering examples?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have numpy two dimension array P such that P[i, j] >= 0 and all P[i, j] sums to one. How to choose pair of indexes (i, j) with probability P[i, j] ?\nEDIT: I am interested in numpy build function. Is there something for this problem? May be for one dimensional array?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1166,"Q_Id":12810499,"Users Score":1,"Answer":"Here's a simple algorithm in python that does what you are expecting.\nLet's take for example a single dimension array P equal to [0.1,0.3,0.4,0.2]. The logic can be extended to any number of dimensions.\nNow we set each element to the sum of all the elements that precede it:\nP => [0, 0.1, 0.4, 0.8, 1]\nUsing a random generator, we generate numbers that are between 0 and 1. Let's say x = 0.2.\nUsing a simple binary search, we can determine that x is between the first element and the second element. We just pick the first element for this value of x.\nIf you look closely, the chance that 0 =< X < 0.1 is 0.1. The chance that 0.1 =< x < 0.4 is 0.3 and so on.\nFor the 2D array, it is better to convert it to a 1D array, even though, you should be able to implement a 2D array binary search algorithm.","Q_Score":2,"Tags":"python,numpy,statistics","A_Id":12810655,"CreationDate":"2012-10-10T00:56:00.000","Title":"randomly choose pair (i, j) with probability P[i, j] given stochastic matrix P","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've read a paper that uses ngram counts as feature for a classifier, and I was wondering what this exactly means.\nExample text: \"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam\"\nI can create unigrams, bigrams, trigrams, etc. out of this text, where I have to define on which \"level\" to create these unigrams. The \"level\" can be character, syllable, word, ...\nSo creating unigrams out of the sentence above would simply create a list of all words? \nCreating bigrams would result in word pairs bringing together words that follow each other? \nSo if the paper talks about ngram counts, it simply creates unigrams, bigrams, trigrams, etc. out of the text, and counts how often which ngram occurs? \nIs there an existing method in python's nltk package? Or do I have to implement a version of my own?","AnswerCount":4,"Available Count":1,"Score":-0.049958375,"is_accepted":false,"ViewCount":21407,"Q_Id":12821201,"Users Score":-1,"Answer":"I don't think there is a specific method in nltk to help with this. This isn't tough though. If you have a sentence of n words (assuming you're using word level), get all ngrams of length 1-n, iterate through each of those ngrams and make them keys in an associative array, with the value being the count. 
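For the question about drawing an index pair with probability P[i, j], numpy's one-dimensional choice plus unravel_index covers the 2-D case directly; a small sketch:

```python
import numpy as np

P = np.array([[0.1, 0.2],
              [0.3, 0.4]])                      # non-negative, sums to one

flat = np.random.choice(P.size, p=P.ravel())    # sample a flattened index
i, j = np.unravel_index(flat, P.shape)          # map it back to (row, col)
print(i, j)
```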
Shouldn't be more than 30 lines of code, you could build your own package for this and import it where needed.","Q_Score":14,"Tags":"python,nlp,nltk","A_Id":12821336,"CreationDate":"2012-10-10T14:01:00.000","Title":"What are ngram counts and how to implement using nltk?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have Toronto Stock Exchange stock data in a maxtor hard drive. The data is a TBF file with .dat and .pos components. The .dat file contains all the Stamp format transmission information in binary format.\nI can read .pos file using R. It has 3 column with numbers, which make no sense to me. The data is information on stock and I think it is the result of Streambase.\nI need to get 2007 price, value, and etc. information on some stocks that I am interested in.\nCould you please suggest any way to read the data? Should I use some particular software to make sense of this data?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":188,"Q_Id":12830437,"Users Score":0,"Answer":"Have you looked in using something like FileViewerPro? Its free download tool in order to open files. Also, what windows programs have you tried so far, like notepad, excel?","Q_Score":3,"Tags":"c#,python,sql,database,sml","A_Id":19391151,"CreationDate":"2012-10-11T00:25:00.000","Title":"How to read tbf file with STAMP encryption","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Python app built with Python, OpenCv and py2exe. \nWhen I distribute this app and try to run it on a windows XP machine, I have an error on startup due to error loading cv2.pyd (opencv python wrapper)\nI looked at cv2.pyd with dependency walker and noticed that some dlls are missing : ieshims.dll and wer.dll. Unfortunately copying these libs doesn't solve the issues some other dlls are missing or not up-to-date.\nAny idea?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":941,"Q_Id":12899513,"Users Score":5,"Answer":"The problem comes from 4 dlls which are copyied by py2exe: msvfw32.dll msacm32.dll, avicap32.dll and avifil32.dll\nAs I am building on Vista, I think that it forces the use of Vista dlls on Windows XP causing some mismatch when trying to load it.\nI removed these 4 dlls and everything seems to work ok (in this case it use the regular system dlls.)","Q_Score":1,"Tags":"python,opencv,windows-xp,py2exe","A_Id":12930212,"CreationDate":"2012-10-15T16:02:00.000","Title":"Python: OpenCV can not be loaded on windows xp","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I use numpy unique without sorting the result but just in the order they appear in the sequence? Something like this?\na = [4,2,1,3,1,2,3,4]\nnp.unique(a) = [4,2,1,3]\nrather than\nnp.unique(a) = [1,2,3,4]\nUse naive solution should be fine to write a simple function. 
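Counting n-grams as described above takes only a few lines with collections.Counter; nltk.util.ngrams can generate the tuples, and whitespace tokenisation is used here just to keep the sketch self-contained:

```python
from collections import Counter
from nltk.util import ngrams

text = "Lorem ipsum dolor sit amet consetetur sadipscing elitr sed diam"
tokens = text.lower().split()            # crude word-level tokenisation

unigram_counts = Counter(tokens)
bigram_counts = Counter(ngrams(tokens, 2))
trigram_counts = Counter(ngrams(tokens, 3))

print(bigram_counts.most_common(3))
```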
But as I need to do this multiple times, are there any fast and neat way to do this?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":32836,"Q_Id":12926898,"Users Score":62,"Answer":"You can do this with the return_index parameter:\n\n>>> import numpy as np\n>>> a = [4,2,1,3,1,2,3,4]\n>>> np.unique(a)\narray([1, 2, 3, 4])\n>>> indexes = np.unique(a, return_index=True)[1]\n>>> [a[index] for index in sorted(indexes)]\n[4, 2, 1, 3]","Q_Score":36,"Tags":"python,numpy","A_Id":12926989,"CreationDate":"2012-10-17T03:58:00.000","Title":"numpy unique without sort","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a lengthy plot, composed o several horizontal subplots organized into a column.\nWhen I call fig.savefig('what.pdf'), the resulting output file shows all the plots crammed onto a single page.\nQuestion: is there a way to tell savefig to save on any number (possibly automatically determined) of pdf pages?\nI'd rather avoid multiple files and then os.system('merge ...'), if possible.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":16939,"Q_Id":12938568,"Users Score":1,"Answer":"I suspect that there is a more elegant way to do this, but one option is to use tempfiles or StringIO to avoid making traditional files on the system and then you can piece those together.","Q_Score":12,"Tags":"python,pdf,pagination,matplotlib","A_Id":12938704,"CreationDate":"2012-10-17T16:07:00.000","Title":"Matplotlib savefig into different pages of a PDF","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to apply a 3x3 or larger image filter (gaussian or median) on a 2-d array.\nThough there are several ways for doing that such as scipy.ndimage.gaussian_filter or applying a loop, I want to know if there is a way to apply a 3x3 or larger filter on each pixel of a mxn array simultaneously, because it would save a lot of time bypassing loops. Can functional programming be used for the purpose?? \nThere is a module called scipy.ndimage.filters.convolve, please tell whether it is able to perform simultaneous operations.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":854,"Q_Id":12968446,"Users Score":1,"Answer":"Even if python did provide functionality to apply an operation to an NxM array without looping over it, the operation would still not be executed simultaneously in the background since the amount of instructions a CPU can handle per cycle is limited and thus no time could be saved. For your use case this might even be counterproductive since the fields in your arrays proably have dependencies and if you don't know in what order they are accessed this will most likely end up in a mess.\nHugues provided some useful links about parallel processing in Python, but be careful when accessing the same data structure such as an array with multiple threads at the same time. 
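Besides stitching temporary files together as the multipage-PDF answer suggests, matplotlib also ships a PdfPages backend that writes one figure per page; a sketch of that alternative route (the page count and layout are arbitrary):

```python
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

with PdfPages("what.pdf") as pdf:
    for page in range(3):                       # three pages of four subplots each
        fig, axes = plt.subplots(4, 1, figsize=(8, 11))
        for k, ax in enumerate(axes):
            ax.plot(range(10), [v * (page * 4 + k + 1) for v in range(10)])
        pdf.savefig(fig)                        # one savefig call per page
        plt.close(fig)
```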
If you don't synchronize the threads they might access the same part of the array at the same time and mess things up.\nAnd be aware, the amount of threads that can effectively be run in parallel is limited by the number of processor cores.","Q_Score":0,"Tags":"python,image,filter,numpy,scipy","A_Id":13009064,"CreationDate":"2012-10-19T06:20:00.000","Title":"Python: Perform an operation on each pixel of a 2-d array simultaneously","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to use Panda3D for my personal project, but after reading the documentation and some example sourcecodes, I still have a few questions: \n\nHow can I render just one frame and save it in a file? \nIn fact I would need to render 2 different images: a single object, and a scene of multiple objects including the previous single object, but just one frame for each and they both need to be saved as image files. \nThe application will be coded in Python, and needs to be very scalable (be used by thousand of users). Would Panda3D fit the bill here? (about my program in Python, it's almost a constant complexity so no problem here, and 3D models will be low-poly and about 5 to 20 per scene). \nI need to calculate the perspective projection of every object to the camera. Is it possible to directly access the vertexes and faces (position, parameters, etc..)? \nCan I recolor my 3D objects? I need to set a simple color for the whole object, but a different color per object. Is it possible? \n\nPlease also note that I'm quite a newbie in the field of graphical and game development, but I know some bits of 3D modelling and 3D theory, as well as computer imaging theory. \nThank you for reading me. \nPS: My main alternative currently is to use Soya3D or PySoy, but they don't seem to be very actively developped nor optimized, so although they would both have a smaller memory footprints, I don't know if they would really perform faster than Panda3D since they're not very optimized...","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2225,"Q_Id":12981607,"Users Score":2,"Answer":"You can use a buffer with setOneShot enabled to make it render only a single frame. You can start Panda3D without a window by setting the \"window-type\" PRC variable to \"none\", and then opening an offscreen buffer yourself. (Note: offscreen buffers without a host window may not be supported universally.)\nIf you set \"window-type\" to \"offscreen\", base.win will actually be a buffer (which may be a bit easier than having to set up your own), after which you can call base.graphicsEngine.render_frame() to render a single frame while avoiding the overhead of the task manager. You have to call it twice because of double-buffering.\nYes. Panda3D is used by Disney for some of their MMORPGs. I do have to add that Panda's high-level networking interfaces are poorly documented.\nYou can calculate the transformation from an object to the camera using nodepath.get_transform(base.cam), which is a TransformState object that you may optionally convert into a matrix using ts.get_mat(). This is surprisingly fast, since Panda maintains a composition cache for transformations so that this doesn't have to happen multiple times. 
You can get the projection matrix (from view space to clip space) using lens.get_projection_mat() or the inverse using lens.get_projection_mat_inv().\nYou may also access the individual vertex data using the Geom interfaces, this is described to detail in the Panda3D manual.\nYou can use set_color to change the base colour of the object (replacing any vertex colours), or you can use set_color_scale to tint the objects, ie. applying a colour that is multiplied with the existing colour. You can also apply a Material object if you use lights and want to use different colours for the diffuse, specular and ambient components.","Q_Score":1,"Tags":"python,3d,rendering,panda3d","A_Id":15449900,"CreationDate":"2012-10-19T19:58:00.000","Title":"Panda3D and Python, render only one frame and other questions","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have four one dimensional lists: X1, Y1, X2, Y2. \n\nX1 and Y1 each have 203 data points. \nX2 and Y2 each have 1532 data points. \nX1 and X2 are at different intervals, but both measure time. \n\nI want to graph Y1 vs Y2.\nI can plot just fine once I get the interpolated data, but can't think of how to interpolate data. I've thought and researched this a couple hours, and just can't figure it out. I don't mind a linear interpolation, but just can't figure out a way.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2501,"Q_Id":12990315,"Users Score":0,"Answer":"If you use matplotlib, you can just call plot(X1, Y1, 'bo', X2, Y2, 'r+'). Change the formatting as you'd like, but it can cope with different lengths just fine. You can provide more than two without any issue.","Q_Score":3,"Tags":"python,graph,plot,interpolation","A_Id":12990987,"CreationDate":"2012-10-20T16:17:00.000","Title":"Data interpolation in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is probably very easy, but after looking through documentation and possible examples online for the past several hours I cannot figure it out.\nI have a large dataset (a spreadsheet) that gets heavily cleaned by a DO file. In the DO file I then want to save certain variables of the cleaned data as a temp .csv run some Python scripts, that produce a new CSV and then append that output to my cleaned data.\nIf that was unclear here is an example.\nAfter cleaning my data set (XYZ) goes from variables A to Z with 100 observations. I want to take variables A and D through F and save it as test.csv. I then want to run a python script that takes this data and creates new variables AA to GG. I want to then take that information and append it to the XYZ dataset (making the dataset now go from A to GG with 100 observations) and then be able to run a second part of my DO file for analysis.\nI have been doing this manually and it is fine but the file is going to start changing quickly and it would save me a lot of time.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2897,"Q_Id":13014789,"Users Score":0,"Answer":"Type \"help shell\" in Stata. 
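For the interpolation question (Y1 sampled at times X1, Y2 at times X2), numpy's linear interpolator can resample one series onto the other's time axis so the two can be plotted against each other; a minimal sketch with made-up data:

```python
import numpy as np
import matplotlib.pyplot as plt

X1 = np.linspace(0, 10, 203)             # 203 time stamps
Y1 = np.sin(X1)
X2 = np.linspace(0, 10, 1532)            # 1532 time stamps
Y2 = np.cos(X2)

Y1_on_X2 = np.interp(X2, X1, Y1)         # Y1 linearly interpolated at X2's times
plt.plot(Y1_on_X2, Y2, ".")              # both series now have the same length
plt.show()
```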
What you want to do is shell out from Stata, call Python, and then have Stata resume whatever you want it to do after the Python script has completed.","Q_Score":1,"Tags":"python,merge,python-3.x,stata","A_Id":13016728,"CreationDate":"2012-10-22T15:35:00.000","Title":"Calling Python from Stata","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In the light of a project I've been playing with Python NLTK and Document Classification and the Naive Bayes classifier. As I understand from the documentation, this works very well if your different documents are tagged with either pos or neg as a label (or more than 2 labels)\nThe documents I'm working with that are already classified don't have labels, but they have a score, a floating point between 0 and 5. \nWhat I would like to do is build a classifier, like the movies example in the documentation, but that would predict the score of a piece of text, rather than the label. I believe this is mentioned in the docs but never further explored as 'probabilities of numeric features'\nI am not a language expert nor a statistician so if someone has an example of this lying around I would be most grateful if you would share this with me. Thanks!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1239,"Q_Id":13015593,"Users Score":0,"Answer":"This is a very late answer, but perhaps it will help someone.\nWhat you're asking about is regression. Regarding Jacob's answer, linear regression is only one way to do it. However, I agree with his recommendation of scikit-learn.","Q_Score":8,"Tags":"python,nltk","A_Id":15627502,"CreationDate":"2012-10-22T16:22:00.000","Title":"NLTK: Document Classification with numeric score instead of labels","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want an algorithm to detect if an image is of high professional quality or is done with poor contrast, low lighting etc. How do I go about designing such an algorithm. \nI feel that it is feasible, since if I press a button in picassa it tries to fix the lighting, contrast and color. Now I have seen that in good pictures if I press the auto-fix buttons the change is not that high as in the bad images. Could this be used as a lead? \nPlease throw any ideas at me. Also if this has already been done before, and I am doing the wheel invention thing, kindly stop me and point me to previous work. \nthanks much,","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":3360,"Q_Id":13018968,"Users Score":3,"Answer":"You are making this way too hard. I handled this in production code by generating a histogram of the image, throwing away outliers (1 black pixel doesn't mean that the whole image has lots of black; 1 white pixel doesn't imply a bright image), then seeing if the resulting distribution covered a sufficient range of brightnesses.\nIn stats terms, you could also see if the histogram approximates a Gaussian distribution with a satisfactorily large standard deviation. If the whole image is medium gray with a tiny stddev, then you have a low contrast image - by definition. 
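The regression route recommended above can be tried quickly in scikit-learn, for example TF-IDF features feeding a linear regressor; the documents and scores below are invented placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

texts = ["loved it, brilliant acting", "dull and far too long", "decent but forgettable"]
scores = [4.5, 1.0, 3.0]                  # floats between 0 and 5, as in the question

model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(texts, scores)
print(model.predict(["brilliant but too long"]))
```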
If the mean is approximately medium-gray but the stddev covers brightness levels from say 20% to 80%, then you have a decent contrast.\nBut note that neither of these approaches require anything remotely resembling machine learning.","Q_Score":1,"Tags":"python,image-processing","A_Id":13019636,"CreationDate":"2012-10-22T20:05:00.000","Title":"How to automatically detect if image is of high quality?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to be use GridSearchCV to determine the parameters of a classifier, and using pipelines seems like a good option.\nThe application will be for image classification using Bag-of-Word features, but the issue is that there is a different logical pipeline depending on whether training or test examples are used. \nFor each training set, KMeans must run to produce a vocabulary that will be used for testing, but for test data no KMeans process is run. \nI cannot see how it is possible to specify this difference in behavior for a pipeline.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1601,"Q_Id":13057113,"Users Score":3,"Answer":"You probably need to derive from the KMeans class and override the following methods to use your vocabulary logic:\n\nfit_transform will only be called on the train data\ntransform will be called on the test data\n\nMaybe class derivation is not alway the best option. You can also write your own transformer class that wraps calls to an embedded KMeans model and provides the fit \/ fit_transform \/ transform API that is expected by the Pipeline class for the first stages.","Q_Score":2,"Tags":"python,machine-learning,scikit-learn","A_Id":13057566,"CreationDate":"2012-10-24T20:24:00.000","Title":"Using custom Pipeline for Cross Validation scikit-learn","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does anyone know a fast algorithm to detect main colors in an image?\nI'm currently using k-means to find the colors together with Python's PIL but it's very slow. One 200x200 image takes 10 seconds to process. I've several hundred thousand images.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4441,"Q_Id":13060069,"Users Score":0,"Answer":"K-means is a good choice for this task because you know number of main colors beforehand. You need to optimize K-means. I think you can reduce your image size, just scale it down to 100x100 pixels or so. Find the size on witch your algorithm works with acceptable speed. Another option is to use dimensionality reduction before k-means clustering.\nAnd try to find fast k-means implementation. Writing such things in python is a misuse of python. 
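A transformer that wraps KMeans, as the accepted answer above suggests, only has to learn its vocabulary in fit and build histograms in transform; GridSearchCV then runs KMeans on training folds only. A rough sketch, where the descriptor shapes and the MiniBatchKMeans choice are assumptions:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.cluster import MiniBatchKMeans

class BagOfVisualWords(BaseEstimator, TransformerMixin):
    def __init__(self, n_words=100):
        self.n_words = n_words

    def fit(self, X, y=None):
        # X: list of (n_descriptors_i, d) arrays, one per training image
        self.kmeans_ = MiniBatchKMeans(n_clusters=self.n_words)
        self.kmeans_.fit(np.vstack(X))           # vocabulary from training data only
        return self

    def transform(self, X):
        # histogram of visual-word assignments per image (train or test)
        out = np.zeros((len(X), self.n_words))
        for i, desc in enumerate(X):
            words = self.kmeans_.predict(desc)
            out[i] = np.bincount(words, minlength=self.n_words)
        return out

# e.g. Pipeline([("bow", BagOfVisualWords()), ("clf", LinearSVC())]) inside GridSearchCV
```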
It's not supposed to be used like this.","Q_Score":8,"Tags":"python,algorithm,colors,python-imaging-library","A_Id":13062863,"CreationDate":"2012-10-25T00:50:00.000","Title":"Fast algorithm to detect main colors in an image?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have come across a problem and am not sure which would be the best suitable technology to implement it. Would be obliged if you guys can suggest me some based on your experience.\nI want to load data from 10-15 CSV files each of them being fairly large 5-10 GBs. By load data I mean convert the CSV file to XML and then populate around 6-7 stagings tables in Oracle using this XML. \nThe data needs to be populated such that the elements of the XML and eventually the rows of the table come from multiple CSV files. So for e.g. an element A would have sub-elements coming data from CSV file 1, file 2 and file 3 etc.\nI have a framework built on Top of Apache Camel, Jboss on Linux. Oracle 10G is the database server.\nOptions I am considering,\n\nSmooks - However the problem is that Smooks serializes one CSV at a time and I cant afford to hold on to the half baked java beans til the other CSV files are read since I run the risk of running out of memory given the sheer number of beans I would need to create and hold on to before they are fully populated written to disk as XML.\nSQLLoader - I could skip the XML creation all together and load the CSV directly to the staging tables using SQLLoader. But I am not sure if I can a. load multiple CSV files in SQL Loader to the same tables updating the records after the first file. b. Apply some translation rules while loading the staging tables.\nPython script to convert the CSV to XML.\nSQLLoader to load a different set of staging tables corresponding to the CSV data and then writing stored procedure to load the actual staging tables from this new set of staging tables (a path which I want to avoid given the amount of changes to my existing framework it would need).\n\nThanks in advance. If someone can point me in the right direction or give me some insights from his\/her personal experience it will help me make an informed decision.\nregards,\n-v-\nPS: The CSV files are fairly simple with around 40 columns each. The depth of objects or relationship between the files would be around 2 to 3.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":2011,"Q_Id":13061800,"Users Score":1,"Answer":"Create a process \/ script that will call a procedure to load csv files to external Oracle table and another script to load it to the destination table.\nYou can also add cron jobs to call these scripts that will keep track of incoming csv files into the directory, process it and move the csv file to an output\/processed folder.\nExceptions also can be handled accordingly by logging it or sending out an email. 
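Combining the two suggestions in the dominant-colour answer above (shrink the image, use a faster k-means implementation) might look like this; the 100x100 size, five clusters and MiniBatchKMeans are arbitrary choices, and the file name is hypothetical:

```python
import numpy as np
from PIL import Image
from sklearn.cluster import MiniBatchKMeans

img = Image.open("photo.jpg").convert("RGB").resize((100, 100))   # hypothetical file
pixels = np.asarray(img, dtype=float).reshape(-1, 3)

km = MiniBatchKMeans(n_clusters=5).fit(pixels)
main_colors = km.cluster_centers_.astype(int)      # five dominant RGB colours
print(main_colors)
```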
Good Luck.","Q_Score":3,"Tags":"python,csv,etl,sql-loader,smooks","A_Id":14449025,"CreationDate":"2012-10-25T04:54:00.000","Title":"Choice of technology for loading large CSV files to Oracle tables","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have come across a problem and am not sure which would be the best suitable technology to implement it. Would be obliged if you guys can suggest me some based on your experience.\nI want to load data from 10-15 CSV files each of them being fairly large 5-10 GBs. By load data I mean convert the CSV file to XML and then populate around 6-7 stagings tables in Oracle using this XML. \nThe data needs to be populated such that the elements of the XML and eventually the rows of the table come from multiple CSV files. So for e.g. an element A would have sub-elements coming data from CSV file 1, file 2 and file 3 etc.\nI have a framework built on Top of Apache Camel, Jboss on Linux. Oracle 10G is the database server.\nOptions I am considering,\n\nSmooks - However the problem is that Smooks serializes one CSV at a time and I cant afford to hold on to the half baked java beans til the other CSV files are read since I run the risk of running out of memory given the sheer number of beans I would need to create and hold on to before they are fully populated written to disk as XML.\nSQLLoader - I could skip the XML creation all together and load the CSV directly to the staging tables using SQLLoader. But I am not sure if I can a. load multiple CSV files in SQL Loader to the same tables updating the records after the first file. b. Apply some translation rules while loading the staging tables.\nPython script to convert the CSV to XML.\nSQLLoader to load a different set of staging tables corresponding to the CSV data and then writing stored procedure to load the actual staging tables from this new set of staging tables (a path which I want to avoid given the amount of changes to my existing framework it would need).\n\nThanks in advance. If someone can point me in the right direction or give me some insights from his\/her personal experience it will help me make an informed decision.\nregards,\n-v-\nPS: The CSV files are fairly simple with around 40 columns each. The depth of objects or relationship between the files would be around 2 to 3.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2011,"Q_Id":13061800,"Users Score":2,"Answer":"Unless you can use some full-blown ETL tool (e.g. Informatica PowerCenter, Pentaho Data Integration), I suggest the 4th solution - it is straightforward and the performance should be good, since Oracle will handle the most complicated part of the task.","Q_Score":3,"Tags":"python,csv,etl,sql-loader,smooks","A_Id":13062737,"CreationDate":"2012-10-25T04:54:00.000","Title":"Choice of technology for loading large CSV files to Oracle tables","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I got linearsvc working against training set and test set using load_file method i am trying to get It working on Multiprocessor enviorment.\nHow can i get multiprocessing work on LinearSVC().fit() LinearSVC().predict()? 
I am not really familiar with datatypes of scikit-learn yet.\nI am also thinking about splitting samples into multiple arrays but i am not familiar with numpy arrays and scikit-learn data structures. \nDoing this it will be easier to put into multiprocessing.pool() , with that , split samples into chunks , train them and combine trained set back later , would it work ? \nEDIT:\nHere is my scenario:\nlets say , we have 1 million files in training sample set , when we want to distribute processing of Tfidfvectorizer on several processors we have to split those samples (for my case it will only have two categories , so lets say 500000 each samples to train) . My server have 24 cores with 48 GB , so i want to split each topics into number of chunks 1000000 \/ 24 and process Tfidfvectorizer on them. Like that i would do to Testing sample set , as well as SVC.fit() and decide(). Does it make sense? \nThanks. \nPS: Please do not close this .","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":11435,"Q_Id":13068257,"Users Score":11,"Answer":"For linear models (LinearSVC, SGDClassifier, Perceptron...) you can chunk your data, train independent models on each chunk and build an aggregate linear model (e.g. SGDClasifier) by sticking in it the average values of coef_ and intercept_ as attributes. The predict method of LinearSVC, SGDClassifier, Perceptron compute the same function (linear prediction using a dot product with an intercept_ threshold and One vs All multiclass support) so the specific model class you use for holding the average coefficient is not important.\nHowever as previously said the tricky point is parallelizing the feature extraction and current scikit-learn (version 0.12) does not provide any way to do this easily.\nEdit: scikit-learn 0.13+ now has a hashing vectorizer that is stateless.","Q_Score":10,"Tags":"python,multithreading,numpy,machine-learning,scikit-learn","A_Id":13084224,"CreationDate":"2012-10-25T12:10:00.000","Title":"Multiprocessing scikit-learn","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have three vectors in 3D a,b,c. Now I want to calculate a rotation r that when applied to a yields a result parallel to b. Then the rotation r needs to be applied to c. \nHow do I do this in python? Is it possible to do this with numpy\/scipy?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1659,"Q_Id":13136828,"Users Score":0,"Answer":"I'll assume the \"geometry library for python\" already answered in the comments on the question. So once you have a transformation that takes 'a' parallel to 'b', you'll just apply it to 'c'\nThe vectors 'a' and 'b' uniquely define a plane. Each vector has a canonical representation as a point difference from the origin, so you have three points: the head of 'a', the head of 'b', and the origin. First compute this plane. It will have an equation in the form Ax + By + Cz = 0. \nA normal vector to this plane defines both the axis of rotation and the sign convention for the direction of rotation. All you need is one normal vector to the plane, since they're all collinear. You can solve for such a vector by picking at two non-collinear vectors in the plane and taking the dot product with the normal vector. 
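The chunk-and-average scheme described in the multiprocessing answer above can be sketched with SGDClassifier; the synthetic data, the four chunks and the hinge loss are only illustrative, and prediction uses the averaged coefficients directly rather than poking them back into an estimator:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)

def train_chunk(Xc, yc):
    clf = SGDClassifier(loss="hinge").fit(Xc, yc)   # linear-SVM-like objective
    return clf.coef_, clf.intercept_

# each call is independent, so it could be farmed out to multiprocessing.Pool
results = [train_chunk(Xc, yc)
           for Xc, yc in zip(np.array_split(X, 4), np.array_split(y, 4))]

coef = np.mean([c for c, _ in results], axis=0)          # (1, n_features)
intercept = np.mean([b for _, b in results], axis=0)     # (1,)

scores = X @ coef.T + intercept                          # linear decision function
pred = (scores.ravel() > 0).astype(int)
print("training accuracy:", (pred == y).mean())
```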
This gives a pair of linear equations in two variables that you can solve with standard methods such as Cramer's rule. In all of these manipulations, if any of A, B, or C are zero, you have a special case to handle.\nThe angle of the rotation is given by the cosine relation for the dot product of 'a' and 'b' and their lengths. The sign of the angle is determined by the triple product of 'a', 'b', and the normal vector. Now you've got all the data to construct a rotation matrix in one of the many canonical forms you can look up.","Q_Score":1,"Tags":"python,3d,geometry","A_Id":13242515,"CreationDate":"2012-10-30T10:15:00.000","Title":"Rotations in 3D","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"My objective is to start with an \"empty\" matrix and repeatedly add columns to it until I have a large matrix.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":255,"Q_Id":13150020,"Users Score":2,"Answer":"Add columns to ndarray(or matrix) need full copy of the content, so you should use other method such as list or the array module, or create a large matrix first, and fill data in it.","Q_Score":1,"Tags":"python,numpy","A_Id":13150059,"CreationDate":"2012-10-31T01:28:00.000","Title":"Is it possible to create a numpy matrix with 10 rows and 0 columns?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a set of butterfly images for training my system to segment a butterfly from a given input image. For this purpose, I want to extract the features such as edges, corners, region boundaries, local maximum\/minimum intensity etc.\nI found many feature extraction methods like Harris corner detection, SIFT but they didn't work well when the image background had the same color as that of the butterfly's body\/boundary color. \nCould anyone please tell whether there is any good feature extraction method which works well for butterfly segmentation? I'm using the Python implementation of OpenCV.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2622,"Q_Id":13151428,"Users Score":2,"Answer":"Are you willing to write your own image processing logic? \nYour best option will likely be to optimize the segmentation\/feature extraction for your problem, instead of using previous implementations like opencv meant for more general use-cases.\nAn option that I've found to work well in noisy\/low-contrast environments is to use a sliding window (i.e. 10x10 pixels) and build a gradient orientation histogram. From this histogram you can recognize the presence of more dominant edges (they accumulate in the histogram) and their orientations (allowing for detection for things like corners) and see the local maximum\/minimums. (I can give more details if needed)\nIf your interested in segmentation as a whole AND user interaction is possible, I would recommend graph cut or grab cut. In graph cut users would be able to fine tune the segmentation. 
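The construction described above (axis from the cross product, angle from the dot product) is usually packaged as Rodrigues' rotation formula; a numpy sketch that rotates a onto b and then applies the same rotation to c, with the anti-parallel corner case only flagged rather than handled:

```python
import numpy as np

def rotation_onto(a, b):
    """Rotation matrix R such that R @ a is parallel to b (Rodrigues' formula)."""
    a = np.asarray(a, float) / np.linalg.norm(a)
    b = np.asarray(b, float) / np.linalg.norm(b)
    v = np.cross(a, b)                   # axis, normal to the plane of a and b
    s, c = np.linalg.norm(v), np.dot(a, b)
    if s < 1e-12:
        # a and b are already parallel; the anti-parallel case (c < 0) needs a
        # 180-degree turn about any axis perpendicular to a, not handled here
        return np.eye(3)
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)

a, b, cvec = [1, 0, 0], [0, 1, 0], np.array([0.0, 0.0, 1.0])
R = rotation_onto(a, b)
print(R @ cvec)                           # c after the same rotation
```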
Grab cut is already in opencv, but may result in the same problems as it takes a single input from the user, then automatically segments the image.","Q_Score":5,"Tags":"python,image-processing,opencv,image-segmentation","A_Id":13162109,"CreationDate":"2012-10-31T04:47:00.000","Title":"Feature extraction for butterfly images","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a GUI python module that is best suited for the following job:\nI am trying to plot a graph with many columns (perhaps hundreds), each column representing an individual. The user should be able to drag the columns around and drop them onto different columns to switch the two. Also, there are going to be additional dots drawn on the columns and by hovering over those dots, the user should see the values corresponding to those dots. What is the best way to approach this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":244,"Q_Id":13151907,"Users Score":0,"Answer":"You can do what you want with Tkinter, though there's no specific widget that does what you ask. There is a general purpose canvas widget that allows you to draw objects (rectangles, circles, images, buttons, etc), and it's pretty easy to add the ability to drag those items around.","Q_Score":1,"Tags":"python,graph,matplotlib,tkinter,wxwidgets","A_Id":13156356,"CreationDate":"2012-10-31T05:45:00.000","Title":"Looking for a specific python gui module to perform the following task","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had developed scatter and lasso selection plots with Chaco. Now, I need to embed a BaseMap [with few markers on a map] onto the plot area side by side.\nI created a BaseMap and tried to add to the traits_view; but it is failing with errors.\nPlease give me some pointers to achieve the same.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":156,"Q_Id":13190187,"Users Score":0,"Answer":"Chaco and matplotlib are completely different tools. Basemap has been built on top of matplotlib so it is not possible to add a Basemap map on a Chaco plot.\nI'm afraid I couldn't find any mapping layer to go with Chaco. Is there a reason you cannot use matplotlib for you plot?","Q_Score":1,"Tags":"python,matplotlib-basemap,chaco","A_Id":16198408,"CreationDate":"2012-11-02T06:08:00.000","Title":"How to use BaseMap with chaco plots","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I found forecast package from R the best solution for time series analysis and forecasting. \nI want to use it in Python. 
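The sliding-window gradient-orientation histogram mentioned in the butterfly answer above can be prototyped in pure numpy before reaching for OpenCV; the window size and bin count here are arbitrary:

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """Magnitude-weighted histogram of gradient orientations for a 2-D patch."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                                    # -pi .. pi
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

window = np.random.rand(10, 10)   # stand-in for a 10x10 window of the butterfly image
print(orientation_histogram(window))
```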
Could I use rpy and after take the forecast package in Python?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1673,"Q_Id":13197097,"Users Score":5,"Answer":"Yes, you could use the [no longer developed or extended, but maintained] package RPy, or you could use the newer package RPy2 which is actively developed.\nThere are other options too, as eg headless network connections to Rserve.","Q_Score":1,"Tags":"python,time-series,forecasting","A_Id":13198574,"CreationDate":"2012-11-02T14:22:00.000","Title":"Forecast Package from R in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Say I have some Python list, my_list which contains N elements. Single elements may be indexed by using my_list[i_1], where i_1 is the index of the desired element. However, Python lists may also be indexed my_list[i_1:i_2] where a \"slice\" of the list from i_1 to i_2 is desired. What is the Big-O (worst-case) notation to slice a list of size N?\nPersonally, if I were coding the \"slicer\" I would iterate from i_1 to i_2, generate a new list and return it, implying O(N), is this how Python does it?\nThank you,","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":47961,"Q_Id":13203601,"Users Score":10,"Answer":"For a list of size N, and a slice of size M, the iteration is actually only O(M), not O(N). Since M is often << N, this makes a big difference.\nIn fact, if you think about your explanation, you can see why. You're only iterating from i_1 to i_2, not from 0 to i_1, then I_1 to i_2.","Q_Score":61,"Tags":"python,list,big-o","A_Id":13203622,"CreationDate":"2012-11-02T21:59:00.000","Title":"Big-O of list slicing","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to integrate any Ordinary Differential Equation backward in time\nusing scipy.integrate.odeint ?\nIf it is possible, could someone tell me what should be the arguement 'time' in 'odeint.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4017,"Q_Id":13227115,"Users Score":0,"Answer":"You can make a change of variables s = t_0 - t, and integrate the differential equation with respect to s. odeint doesn't do this for you.","Q_Score":2,"Tags":"python-2.7,scipy","A_Id":13229534,"CreationDate":"2012-11-05T06:38:00.000","Title":"Backward integration in time using scipy odeint","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have a csv file with 400 columns. I cannot load the entire file into a DataFrame (won't fit in memory). However, I only really want 50 columns, and this will fit in memory. I don't see any built in Pandas way to do this. What do you suggest? I'm open to using the PyTables interface, or pandas.io.sql. \nThe best-case scenario would be a function like: pandas.read_csv(...., columns=['name', 'age',...,'income']). I.e. 
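The change of variables s = t_0 - t suggested in the odeint answer above turns backward integration into an ordinary forward call; a sketch on the toy ODE dy/dt = -y:

```python
import numpy as np
from scipy.integrate import odeint

def f(y, t):                 # toy ODE: dy/dt = -y
    return -y

t0, y_at_t0 = 5.0, 2.0       # state known at the final time t0
s = np.linspace(0.0, 5.0, 101)          # s = t0 - t increases as t decreases

def g(y, s):
    return -f(y, t0 - s)     # chain rule: dy/ds = -(dy/dt)

y_back = odeint(g, y_at_t0, s)          # y_back[k] approximates y(t0 - s[k])
```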
we pass a list of column names (or numbers) that will be loaded.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":7641,"Q_Id":13236098,"Users Score":2,"Answer":"There's no default way to do this right now. I would suggest chunking the file and iterating over it and discarding the columns you don't want.\nSo something like pd.concat([x.ix[:, cols_to_keep] for x in pd.read_csv(..., chunksize=200)])","Q_Score":9,"Tags":"python,pandas,csv","A_Id":13236277,"CreationDate":"2012-11-05T16:20:00.000","Title":"How to load only specific columns from csv file into a DataFrame","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am importing study data into a Pandas data frame using read_csv. \nMy subject codes are 6 numbers coding, among others, the day of birth. For some of my subjects this results in a code with a leading zero (e.g. \"010816\").\nWhen I import into Pandas, the leading zero is stripped of and the column is formatted as int64.\nIs there a way to import this column unchanged maybe as a string? \nI tried using a custom converter for the column, but it does not work - it seems as if the custom conversion takes place before Pandas converts to int.","AnswerCount":6,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":62070,"Q_Id":13250046,"Users Score":3,"Answer":"You Can do This , Works On all Versions of Pandas\npd.read_csv('filename.csv', dtype={'zero_column_name': object})","Q_Score":76,"Tags":"python,pandas,csv,types","A_Id":58968554,"CreationDate":"2012-11-06T11:27:00.000","Title":"How to keep leading zeros in a column when reading CSV with Pandas?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to sort a list of unknown values, either ints or floats or both, in ascending order. i.e, [2,-1,1.0] would become [-1,1.0,2]. Unfortunately, the sorted() function doesn't seem to work as it seems to sort in descending order by absolute value. Any ideas?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":5933,"Q_Id":13318611,"Users Score":2,"Answer":"I had the same problem. The answer: Python will sort numbers by the absolute value if you have them as strings. So as your key, make sure to include an int() or float() argument. My working syntax was\ndata = sorted(data, key = lambda x: float(x[0]))\n...the lambda x part just gives a function which outputs the thing you want to sort by. So it takes in a row in my list, finds the float 0th element, and sorts by that.","Q_Score":2,"Tags":"sorting,python-2.7,absolute-value","A_Id":29600848,"CreationDate":"2012-11-10T02:17:00.000","Title":"Sort a list of ints and floats with negative and positive values?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to sort a list of unknown values, either ints or floats or both, in ascending order. i.e, [2,-1,1.0] would become [-1,1.0,2]. Unfortunately, the sorted() function doesn't seem to work as it seems to sort in descending order by absolute value. 
Any ideas?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":5933,"Q_Id":13318611,"Users Score":0,"Answer":"In addition to doublefelix,below code gives the absolute order to me from string.\nsiparis=sorted(siparis, key=lambda sublist:abs(float(sublist[1])))","Q_Score":2,"Tags":"sorting,python-2.7,absolute-value","A_Id":65985248,"CreationDate":"2012-11-10T02:17:00.000","Title":"Sort a list of ints and floats with negative and positive values?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a gray image in which I want to map every pixel to N other matrices of size LxM.How do I initialize such a matrix?I tried\n result=numpy.zeros(shape=(i_size[0],i_size[1],N,L,M)) for which I get the Value Error 'array is too big'.Can anyone suggest an alternate method?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3179,"Q_Id":13321042,"Users Score":0,"Answer":"If I understand correctly, every pixel in the gray image is mapped to a single pixel in N other images. In that case, the map array is numpy.zeros((i.shape[0], i.shape[1], N, 2), dtype=numpy.int32) since you need to store 1 x and 1 y coordinate into each other N arrays, not the full Nth array every time. Using integer indices will further reduce memory use.\nThen result[y,x,N,0] and result[y,x,N,1] are the y and x mappings into the Nth image.","Q_Score":1,"Tags":"python,numpy","A_Id":13345287,"CreationDate":"2012-11-10T10:01:00.000","Title":"Creating a 5D array in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Naive Bayes classifier in python for text classification. Is there any smoothing methods to avoid zero probability for unseen words in python NLTK? Thanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1430,"Q_Id":13356348,"Users Score":2,"Answer":"I'd suggest to replace all the words with low (specially 1) frequency to , then train the classifier in this data. \nFor classifying you should query the model for in the case of a word that is not in the training data.","Q_Score":4,"Tags":"python,nltk,smoothing","A_Id":13397869,"CreationDate":"2012-11-13T06:20:00.000","Title":"Smoothing in python NLTK","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to use in Python the svm_model, generated in matlab? (I use libsvm)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":13383684,"Users Score":0,"Answer":"Normally you would just call a method in libsvm to save your model to a file. You then can just use it in Python using their svm.py. 
So yes, you can - it's all saved in libsvm format.","Q_Score":0,"Tags":"python,matlab,libsvm","A_Id":13445709,"CreationDate":"2012-11-14T17:09:00.000","Title":"Is it possible to use in Python the svm_model, generated in matlab?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let me explain what I'm trying to achieve. In the past while working on Java platform, I used to write Java codes(say, to push or pull data from MySQL database etc.) then create a war file which essentially bundles all the class files, supporting files etc and put it under a servlet container like Tomcat and this becomes a web service and can be invoked from any platform. \nIn my current scenario, I've majority of work being done in Java, however the Natural Language Processing(NLP)\/Machine Learning(ML) part is being done in Python using the NLTK, Scipy, Numpy etc libraries. I'm trying to use the services of this Python engine in existing Java code. Integrating the Python code to Java through something like Jython is not that straight-forward(as Jython does not support calling any python module which has C based extensions, as far as I know), So I thought the next option would be to make it a web service, similar to what I had done with Java web services in the past. Now comes the actual crux of the question, how do I run the ML engine as a web service and call the same from any platform, in my current scenario this happens to be Java. I tried looking in the web, for various options to achieve this and found things like CherryPy, Werkzeug etc but not able to find the right approach or any sample code or anything that shows how to invoke a NLTK-Python script and serve the result through web, and eventually replicating the functionality Java web service provides. In the Python-NLTK code, the ML engine does a data-training on a large corpus(this takes 3-4 minutes) and we don't want the Python code to go through this step every time a method is invoked. If I make it a web service, the data-training will happen only once, when the service starts and then the service is ready to be invoked and use the already trained engine. \nNow coming back to the problem, I'm pretty new to this web service things in Python and would appreciate any pointers on how to achieve this .Also, any pointers on achieving the goal of calling NLTK based python scripts from Java, without using web services approach and which can deployed on production servers to give good performance would also be helpful and appreciable. 
Thanks in advance.\nJust for a note, I'm currently running all my code on a Linux machine with Python 2.6, JDK 1.6 installed on it.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1090,"Q_Id":13394969,"Users Score":0,"Answer":"NLTK based system tends to be slow at response per request, but good throughput can be achieved given enough RAM.","Q_Score":4,"Tags":"python,machine-learning,cherrypy","A_Id":13399425,"CreationDate":"2012-11-15T09:51:00.000","Title":"How to expose an NLTK based ML(machine learning) Python Script as a Web Service?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"What is the preferred way of doing the conversion using PIL\/Numpy\/SciPy today?","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55003,"Q_Id":13405956,"Users Score":0,"Answer":"At the moment I haven't found a good package to do that. You have to bear in mind that RGB is a device-dependent colour space so you can't convert accurately to XYZ or CIE Lab if you don't have a profile. \nSo be aware that many solutions where you see converting from RGB to CIE Lab without specifying the colour space or importing a colour profile must be carefully evaluated. Take a look at the code under the hood most of the time they assume that you are dealing with sRGB colour space.","Q_Score":49,"Tags":"python,numpy,scipy,python-imaging-library,color-space","A_Id":60718937,"CreationDate":"2012-11-15T20:49:00.000","Title":"Convert an image RGB->Lab with python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I noticed a bug in my program and the reason it is happening is because it seems that pandas is copying by reference a pandas dataframe instead of by value. I know immutable objects will always be passed by reference but pandas dataframe is not immutable so I do not see why it is passing by reference. Can anyone provide some information? \nThanks!\nAndrew","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":17836,"Q_Id":13419822,"Users Score":41,"Answer":"All functions in Python are \"pass by reference\", there is no \"pass by value\". If you want to make an explicit copy of a pandas object, try new_frame = frame.copy().","Q_Score":19,"Tags":"python,pandas","A_Id":13420016,"CreationDate":"2012-11-16T15:43:00.000","Title":"pandas dataframe, copy by value","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I aim to start opencv little by little but first I need to decide which API of OpenCV is more useful. I predict that Python implementation is shorter but running time will be more dense and slow compared to the native C++ implementations. 
Is there any know can comment about performance and coding differences between these two perspectives?","AnswerCount":5,"Available Count":2,"Score":0.1194272985,"is_accepted":false,"ViewCount":76398,"Q_Id":13432800,"Users Score":3,"Answer":"Why choose?\nIf you know both Python and C++, use Python for research using Jupyter Notebooks and then use C++ for implementation.\nThe Python stack of Jupyter, OpenCV (cv2) and Numpy provide for fast prototyping.\nPorting the code to C++ is usually quite straight-forward.","Q_Score":92,"Tags":"c++,python,performance,opencv","A_Id":66955473,"CreationDate":"2012-11-17T17:14:00.000","Title":"Does performance differ between Python or C++ coding of OpenCV?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I aim to start opencv little by little but first I need to decide which API of OpenCV is more useful. I predict that Python implementation is shorter but running time will be more dense and slow compared to the native C++ implementations. Is there any know can comment about performance and coding differences between these two perspectives?","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":76398,"Q_Id":13432800,"Users Score":6,"Answer":"You're right, Python is almost always significantly slower than C++ as it requires an interpreter, which C++ does not. However, that does require C++ to be strongly-typed, which leaves a much smaller margin for error. Some people prefer to be made to code strictly, whereas others enjoy Python's inherent leniency.\nIf you want a full discourse on Python coding styles vs. C++ coding styles, this is not the best place, try finding an article.\nEDIT:\nBecause Python is an interpreted language, while C++ is compiled down to machine code, generally speaking, you can obtain performance advantages using C++. However, with regard to using OpenCV, the core OpenCV libraries are already compiled down to machine code, so the Python wrapper around the OpenCV library is executing compiled code. In other words, when it comes to executing computationally expensive OpenCV algorithms from Python, you're not going to see much of a performance hit since they've already been compiled for the specific architecture you're working with.","Q_Score":92,"Tags":"c++,python,performance,opencv","A_Id":13432830,"CreationDate":"2012-11-17T17:14:00.000","Title":"Does performance differ between Python or C++ coding of OpenCV?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My score was to get the most frequent color in a image, so I implemented a k-means algorithm.\nThe algorithm works good, but the result is not the one I was waiting for.\nSo now I'm trying to do some improvements, the first I thought was to implement k-means++, so I get a beter position for the inicial clusters centers.\nFirst I select a random point, but how can I select the others. I mean how I define the minimal distance between them.\nAny help for this? Thanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1239,"Q_Id":13436032,"Users Score":0,"Answer":"You can use a vector quantisation. 
You can make a list of each pixel and each adjacent pixel in x+1 and y+1 direction and pick the difference and plot it along a diagonale. Then you can calculate a voronoi diagram and get the mean color and compute a feature vector. It's a bit more effectice then to use a simple grid based mean color.","Q_Score":0,"Tags":"python,colors,cluster-computing,k-means","A_Id":13436279,"CreationDate":"2012-11-17T23:55:00.000","Title":"K-Means plus plus implementation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a visualization that models the trajectory of an object over a planar surface. Currently, the algorithm I have been provided with uses a simple trajectory function (where velocity and gravity are provided) and Runge-Kutta integration to check n points along the curve for a point where velocity becomes 0. We are discounting any atmospheric interaction.\nWhat I would like to do it introduce a non-planar surface, say from a digital terrain model (raster). My thought is to calculate a Reimann sum at each pixel and determine if the offset from the planar surface is equal to or less than the offset of the underlying topography from the planar surface.\nIs it possible, using numpy or scipy, to calculate the height of a Reimann rectangle? Conversely, the area of the rectangle (midpoint is fine) would work, as I know the width nd can calculate the height.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":329,"Q_Id":13460428,"Users Score":0,"Answer":"For computing Reimann sums you could look into numpy.cumsum(). I am not sure if you can do a surface or only an array with this method. However, you could always loop through all the rows of your terrain and store each row in a two dimensional array as you go. Leaving you with an array of all the terrain heights.","Q_Score":0,"Tags":"python,numpy,scipy","A_Id":13463491,"CreationDate":"2012-11-19T19:08:00.000","Title":"Scipy \/ Numpy Reimann Sum Height","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Assuming performance is not an issue, is there a way to deploy numpy in a environment where a compiler is unavailable, and no pre-built binaries can be installed? \nAlternatively, is there a pure-python numpy implementation?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":435,"Q_Id":13466939,"Users Score":3,"Answer":"a compiler is unavailable, and no pre-built binaries can be installed\n\nThis... makes numpy impossible. 
If you cannot install numpy binaries, and you cannot compile numpy source code, then you are left with no options.","Q_Score":2,"Tags":"python,numpy","A_Id":13467084,"CreationDate":"2012-11-20T05:08:00.000","Title":"Installing (and using) numpy without access to a compiler or binaries","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm extracting a large CSV file (200Mb) that was generated using R with Python (I'm the one using python).\nI do some tinkering with the file (normalization, scaling, removing junk columns, etc) and then save it again using numpy's savetxt with data delimiter as ',' to keep the csv property.\nThing is, the new file is almost twice as large as the original (almost 400Mb). The original data as well as the new one are only arrays of floats.\nIf it helps, it looks as if the new file has really small values that need exponential notation, which the original did not have.\nAny idea why this is happening?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":280,"Q_Id":13491731,"Users Score":2,"Answer":"Have you looked at the way floats are represented in text before and after? You might have a line \"1.,2.,3.\" become \"1.000000e+0, 2.000000e+0,3.000000e+0\" or something like that, the two are both valid and both represent the same numbers.\nMore likely, however, is that if the original file contained floats as values with relatively few significant digits (for example \"1.1, 2.2, 3.3\"), after you do normalization and scaling, you \"create\" more digits which are needed to represent the results of your math but do not correspond to a real increase in precision (for example, normalizing the sum of values to 1.0 in the last example gives \"0.1666666, 0.3333333, 0.5\").\nI guess the short answer is that there is no guarantee (and no requirement) for floats represented as text to occupy any particular amount of storage space, or less than the maximum possible per float; it can vary a lot even if the data remains the same, and will certainly vary if the data changes.","Q_Score":0,"Tags":"python,numpy","A_Id":13491927,"CreationDate":"2012-11-21T10:55:00.000","Title":"Numpy save file is larger than the original","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In time series we can find peaks (min and max values). There are algorithms to find peaks. My question is:\nIn python are there libraries for peak detection in time series data?\nor \nsomething in R using RPy?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1640,"Q_Id":13520319,"Users Score":1,"Answer":"Calculate the derivative of your sample points; for example, for every 5 points (THRESHOLD!) calculate the slope of the five points with the least squares method (search on wiki if you don't know what it is. Any linear regression function uses it). And when this slope is almost (THRESHOLD!) 
zero there is a peak.","Q_Score":3,"Tags":"python,r,time-series","A_Id":13520565,"CreationDate":"2012-11-22T21:37:00.000","Title":"Peak detection in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to track a multicolored object(4 colors). Currently, I am parsing the image into HSV and applying multiple color filter ranges on the camera feed and finally adding up filtered images. Then I filter the contours based on area. \nThis method is quite stable most of the time but when the external light varies a bit, the object is not recognized as the hue values are getting messed up and it is getting difficult to track the object. \nAlso since I am filtering the contours based on area I often have false positives and the object is not being tracked properly sometimes.\nDo you have any suggestion for getting rid of these problems. Could I use some other method to track it instead of filtering individually on colors and then adding up the images and searching for contours?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":475,"Q_Id":13526654,"Users Score":0,"Answer":"For a full proof track you need to combine more than one method...following are some of the hints...\n\nif you have prior knowledge of the object then you can use template matching...but template matching is little process intensive...if you are using GPU then you might have some benefit\nfrom your write up i presume external light varies to a lesser extent...so on that ground you can use goodfeaturestotrack function of opencv and use optical flow to track only those ponits found by goodfeaturestotrack in the next frames of the video\nif the background is stable except some brightness variation and the object is moving comparatively more than background then you can subtract the previous frame from the present frame to get the position of the moving object...this is kind of fast and easy change detection technique...\nFiltering contours based on area is good idea but try to add some more features to the filtering criteria...i.e. you can try filtering based on ellpticity,aspect ratio of the bounding box etc...\nlastly...if you have any prior knowledge about the motion path of the object you can use kalman filter...\nif the background is all most not variant or to some extent variant then you can try gaussian mixture model to model the background...while the changing ball is your fore ground...","Q_Score":2,"Tags":"python,opencv,tracking","A_Id":13530687,"CreationDate":"2012-11-23T09:57:00.000","Title":"Tracking a multicolor object","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to track a multicolored object(4 colors). Currently, I am parsing the image into HSV and applying multiple color filter ranges on the camera feed and finally adding up filtered images. Then I filter the contours based on area. \nThis method is quite stable most of the time but when the external light varies a bit, the object is not recognized as the hue values are getting messed up and it is getting difficult to track the object. 
\nAlso since I am filtering the contours based on area I often have false positives and the object is not being tracked properly sometimes.\nDo you have any suggestion for getting rid of these problems. Could I use some other method to track it instead of filtering individually on colors and then adding up the images and searching for contours?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":475,"Q_Id":13526654,"Users Score":0,"Answer":"You might try having multiple or an infinite number of models of the object depending upon the light sources available, and then classifying your object as either the object with one of the light sources or not the object. Note: this is a machine learning-type approach to the problem.\nFiltering with a Kalman, extended Kalman filter, or particle filter (depending on your application) would be a good idea, so that you can have a \"memory\" of the recently tracked features and have expectations for the next tracked color\/feature in the near term (i.e. if you just saw the object, there is a high likelihood that it hasn't disappeared in the next frame).\nIn general, this is a difficult problem that I have run into a few times doing robotics research. The only robust solution is to learn models and to confirm or deny them with what your system actually sees. Any number of machine learning approaches should work, but the easiest would probably be support vector machines. The most robust would probably be something like Gaussian Processes (if you want to do an infinite number of models). Good luck and don't get too frustrated; this is not an easy problem!","Q_Score":2,"Tags":"python,opencv,tracking","A_Id":13534342,"CreationDate":"2012-11-23T09:57:00.000","Title":"Tracking a multicolor object","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to do a execution time analysis of the bellman ford algorithm on a large number of graphs and in order to do that I need to generate a large number of random DAGS with the possibility of having negative edge weights.\nI am using networkx in python. There are a lot of random graph generators in the networkx library but what will be the one that will return the directed graph with edge weights and the source vertex.\nI am using networkx.generators.directed.gnc_graph() but that does not quite guarantee to return only a single source vertex.\nIs there a way to do this with or even without networkx?","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4642,"Q_Id":13543069,"Users Score":1,"Answer":"I noticed that the generated graphs have always exactly one sink vertex which is the first vertex. 
You can reverse direction of all edges to get a graph with single source vertex.","Q_Score":5,"Tags":"python,random,graph,networkx,bellman-ford","A_Id":13544567,"CreationDate":"2012-11-24T16:25:00.000","Title":"how to create random single source random acyclic directed graphs with negative edge weights in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"PyOpenGL docs say:\n\nBecause of the way OpenGL and ctypes handle, for instance, pointers, to array data, it is often necessary to ensure that a Python data-structure is retained (i.e. not garbage collected). This is done by storing the data in an array of data-values that are indexed by a context-specific key. The functions to provide this functionality are provided by the OpenGL.contextdata module.\n\nWhen exactly is it the case?\nOne situation I've got in my mind is client-side vertex arrays back from OpenGL 1, but they have been replaced by buffer objects for years. A client side array isn't required any more after a buffer object is filled (= right after glBufferData returns, I pressume).\nAre there any scenarios I'm missing?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":176,"Q_Id":13584900,"Users Score":1,"Answer":"Are there any scenarios I'm missing?\n\nBuffer mappings obtained through glMapBuffer","Q_Score":1,"Tags":"python,opengl,ctypes,pyopengl","A_Id":13585375,"CreationDate":"2012-11-27T13:04:00.000","Title":"What is PyOpenGL's \"context specific data\"?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using multiple threads to access and delete data in my pandas dataframe. Because of this, I am wondering is pandas dataframe threadsafe?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":17987,"Q_Id":13592618,"Users Score":18,"Answer":"The data in the underlying ndarrays can be accessed in a threadsafe manner, and modified at your own risk. Deleting data would be difficult as changing the size of a DataFrame usually requires creating a new object. I'd like to change this at some point in the future.","Q_Score":22,"Tags":"python,thread-safety,pandas","A_Id":13593942,"CreationDate":"2012-11-27T20:38:00.000","Title":"python pandas dataframe thread safe?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have read several related posts about installing numpy for python version 2.7 on a 64 bit windows7 OS. Before I try these, does anybody know if the 32bit version will work on a 64bit system?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":380,"Q_Id":13594953,"Users Score":0,"Answer":"It should work if you're using 32-bit Python. 
If you're using 64-bit Python you'll need 64-bit Numpy.","Q_Score":0,"Tags":"windows,numpy,python-2.7,64-bit","A_Id":13595084,"CreationDate":"2012-11-27T23:15:00.000","Title":"numpy for 64 bit windows","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have read several related posts about installing numpy for python version 2.7 on a 64 bit windows7 OS. Before I try these, does anybody know if the 32bit version will work on a 64bit system?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":380,"Q_Id":13594953,"Users Score":0,"Answer":"If you are getting it from pip and you want a 64 bit version of NumPy, you need MSVS 2008. pip needs to compile NumPy module with the same compiler that Python binary was compiled with.\nThe last I checked (this Summer), python's build.py on Windows only supported up to that version of MSVS. Probably because build.py isn't updated for compilers which are not clearly available for free as compile-only versions. There is an \"Express\" version of MSVS 2010, 2012 and 2013 (which would satisfy that requirement). But I am not sure if there is a dedicated repository for them and if they have a redistribution license. If there is, then the only problem is that no one got around to upgrading build.py to support the newer vertsions of MSVS.","Q_Score":0,"Tags":"windows,numpy,python-2.7,64-bit","A_Id":33553807,"CreationDate":"2012-11-27T23:15:00.000","Title":"numpy for 64 bit windows","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently working on a project, a simple sentiment analyzer such that there will be 2 and 3 classes in separate cases. I am using a corpus that is pretty rich in the means of unique words (around 200.000). I used bag-of-words method for feature selection and to reduce the number of unique features, an elimination is done due to a threshold value of frequency of occurrence. The final set of features includes around 20.000 features, which is actually a 90% decrease, but not enough for intended accuracy of test-prediction. I am using LibSVM and SVM-light in turn for training and prediction (both linear and RBF kernel) and also Python and Bash in general.\nThe highest accuracy observed so far is around 75% and I need at least 90%. This is the case for binary classification. For multi-class training, the accuracy falls to ~60%. I need at least 90% at both cases and can not figure how to increase it: via optimizing training parameters or via optimizing feature selection?\nI have read articles about feature selection in text classification and what I found is that three different methods are used, which have actually a clear correlation among each other. These methods are as follows:\n\nFrequency approach of bag-of-words (BOW)\nInformation Gain (IG)\nX^2 Statistic (CHI)\n\nThe first method is already the one I use, but I use it very simply and need guidance for a better use of it in order to obtain high enough accuracy. 
I am also lacking knowledge about practical implementations of IG and CHI and looking for any help to guide me in that way.\nThanks a lot, and if you need any additional info for help, just let me know.\n\n\n@larsmans: Frequency Threshold: I am looking for the occurrences of unique words in examples, such that if a word is occurring in different examples frequently enough, it is included in the feature set as a unique feature. \n@TheManWithNoName: First of all thanks for your effort in explaining the general concerns of document classification. I examined and experimented all the methods you bring forward and others. I found Proportional Difference (PD) method the best for feature selection, where features are uni-grams and Term Presence (TP) for the weighting (I didn't understand why you tagged Term-Frequency-Inverse-Document-Frequency (TF-IDF) as an indexing method, I rather consider it as a feature weighting approach). Pre-processing is also an important aspect for this task as you mentioned. I used certain types of string elimination for refining the data as well as morphological parsing and stemming. Also note that I am working on Turkish, which has different characteristics compared to English. Finally, I managed to reach ~88% accuracy (f-measure) for binary classification and ~84% for multi-class. These values are solid proofs of the success of the model I used. This is what I have done so far. Now working on clustering and reduction models, have tried LDA and LSI and moving on to moVMF and maybe spherical models (LDA + moVMF), which seems to work better on corpus those have objective nature, like news corpus. If you have any information and guidance on these issues, I will appreciate. I need info especially to setup an interface (python oriented, open-source) between feature space dimension reduction methods (LDA, LSI, moVMF etc.) and clustering methods (k-means, hierarchical etc.).","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":30760,"Q_Id":13603882,"Users Score":1,"Answer":"Linear svm is recommended for high dimensional features. Based on my experience the ultimate limitation of SVM accuracy depends on the positive and negative \"features\". You can do a grid search (or in the case of linear svm you can just search for the best cost value) to find the optimal parameters for maximum accuracy, but in the end you are limited by the separability of your feature-sets. The fact that you are not getting 90% means that you still have some work to do finding better features to describe your members of the classes.","Q_Score":53,"Tags":"python,nlp,svm,sentiment-analysis,feature-extraction","A_Id":13615685,"CreationDate":"2012-11-28T11:21:00.000","Title":"Feature Selection and Reduction for Text Classification","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am having a little bit of trouble creating a haar classifier. I need to build up a classifier to detect cars. At the moment I made a program in python that reads in an image, I draw a rectangle around the area the object is in, Once the rectangle is drawn, it outputs the image name, the top left and bottom right coordinates of the rectangle. I am unsure of where to go from here and how to actually build up the classifier. 
Can anyone offer me any help?\nEDIT*\nI am looking for help on how to use the opencv_traincascade. I have looked at the documentation but I can't quite figure out how to use it to create the xml file to be used in the detection program.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1867,"Q_Id":13611126,"Users Score":1,"Answer":"This looks like you need to determine what features you would like to train your classifier on first, as using the haar classifier it benefits from those extra features. From there you will need to train the classifier, this requires you to get a lot of images that have cars and those that do not have cars in them and then run it over this and having it tweak the average it is shooting for in order to classify to the best it can with your selected features.\nTo get a better classifier you will have to figure out the order of your features and the optimal order you put them together to further dive into the object and determine if it is in fact what you are looking for. Again this will require a lot of examples for your particular features and your problem as a whole.","Q_Score":1,"Tags":"python,opencv,machine-learning,computer-vision,object-detection","A_Id":13612350,"CreationDate":"2012-11-28T17:38:00.000","Title":"Creating a haar classifier using opencv_traincascade","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Any ideas on how EdgeNgram treats numbers?\nI'm running haystack with an ElasticSearch backend. I created an indexed field of type EdgeNgram. This field will contain a string that may contain words as well as numbers.\nWhen I run a search against this field using a partial word, it works how it's supposed to. But if I put in a partial number, I'm not getting the result that I want.\nExample:\nI search for the indexed field \"EdgeNgram 12323\" by typing \"edgen\" and I'll get the index returned to me. If I search for that same index by typing \"123\" I get nothing.\nThoughts?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2671,"Q_Id":13636419,"Users Score":3,"Answer":"if you're using the edgeNGram tokenizer, then it will treat \"EdgeNGram 12323\" as a single token and then apply the edgeNGram'ing process on it. For example, if min_grams=1 max_grams=4, you'll get the following tokens indexed: [\"E\", \"Ed\", \"Edg\", \"Edge\"]. So I guess this is not what you're really looking for - consider using the edgeNGram token filter instead:\nIf you're using the edgeNGram token filter, make sure you're using a tokenizer that actually tokenizes the text \"EdgeNGram 12323\" to produce two tokens out of it: [\"EdgeNGram\", \"12323\"] (standard or whitespace tokenizer will do the trick). 
Then apply the edgeNGram filter next to it.\nIn general, edgeNGram will take \"12323\" and produce tokens such as \"1\", \"12\", \"123\", etc...","Q_Score":6,"Tags":"python,elasticsearch,django-haystack","A_Id":13637244,"CreationDate":"2012-11-29T23:05:00.000","Title":"ElasticSearch: EdgeNgrams and Numbers","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have data and a time 'value' associated with it (Tx and X).\nHow can I perform a fast Fourier transform on my data.\nTx is an array I have and X is another array I have. The length of both arrays are of course the same and they are associated by Tx[i] with X[i] , where i goes from 0 to len(X).\nHow can I perform a fft on such data to ultimately achieve a Power Spectral Density plot frequency against |fft|^2.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":1901,"Q_Id":13636758,"Users Score":3,"Answer":"If the data is not uniformly sampled (i.e. Tx[i]-Tx[i-1] is constant), then you cannot do an FFT on it.\nHere's an idea:\nIf you have a pretty good idea of the bandwidth of the signal, then you could create a resampled version of the DFT basis vectors R. I.e. the complex sinusoids evaluated at the Tx times. Then solve the linear system x = A*z: where x is your observation, z is the unknown frequency content of the signal, and A is the resamapled DFT basis. Note that A may not actually be a basis depending on the severity of the non-uniformity. It will almost certainly not be an orthogonal basis like the DFT.","Q_Score":2,"Tags":"python,numpy,scipy,fft","A_Id":13645588,"CreationDate":"2012-11-29T23:33:00.000","Title":"Fast Fourier Transform (fft) with Time Associated Data Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I filter which lines of a CSV to be loaded into memory using pandas? This seems like an option that one should find in read_csv. Am I missing something?\nExample: we've a CSV with a timestamp column and we'd like to load just the lines that with a timestamp greater than a given constant.","AnswerCount":7,"Available Count":2,"Score":-1.0,"is_accepted":false,"ViewCount":95461,"Q_Id":13651117,"Users Score":-4,"Answer":"You can specify nrows parameter.\n\nimport pandas as pd\ndf = pd.read_csv('file.csv', nrows=100)\n\nThis code works well in version 0.20.3.","Q_Score":123,"Tags":"python,pandas","A_Id":53256590,"CreationDate":"2012-11-30T18:38:00.000","Title":"How can I filter lines on load in Pandas read_csv function?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I filter which lines of a CSV to be loaded into memory using pandas? This seems like an option that one should find in read_csv. 
Am I missing something?\nExample: we've a CSV with a timestamp column and we'd like to load just the lines that with a timestamp greater than a given constant.","AnswerCount":7,"Available Count":2,"Score":0.1137907297,"is_accepted":false,"ViewCount":95461,"Q_Id":13651117,"Users Score":4,"Answer":"If the filtered range is contiguous (as it usually is with time(stamp) filters), then the fastest solution is to hard-code the range of rows. Simply combine skiprows=range(1, start_row) with nrows=end_row parameters. Then the import takes seconds where the accepted solution would take minutes. A few experiments with the initial start_row are not a huge cost given the savings on import times. Notice we kept header row by using range(1,..).","Q_Score":123,"Tags":"python,pandas","A_Id":60026814,"CreationDate":"2012-11-30T18:38:00.000","Title":"How can I filter lines on load in Pandas read_csv function?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I realize Dataframe takes a map of {'series_name':Series(data, index)}. However, it automatically sorts that map even if the map is an OrderedDict().\nIs there a simple way to pass a list of Series(data, index, name=name) such that the order is preserved and the column names are the series.name? Is there an easy way if all the indices are the same for all the series? \nI normally do this by just passing a numpy column_stack of series.values and specifying the column names. However, this is ugly and in this particular case the data is strings not floats.","AnswerCount":7,"Available Count":1,"Score":0.1137907297,"is_accepted":false,"ViewCount":51167,"Q_Id":13653030,"Users Score":4,"Answer":"Check out DataFrame.from_items too","Q_Score":26,"Tags":"python,pandas","A_Id":13852311,"CreationDate":"2012-11-30T20:54:00.000","Title":"How do I Pass a List of Series to a Pandas DataFrame?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to implement a hadoop reducer for word counting.\nIn my reducer I use a hash table to count the words.But if my file is extremely large the hash table will use extreme amount of memory.How I can address this issue ?\n(E.g A file with 10 million lines each reducer receives 100million words how can he count the words a hash table requires 100million keys)\nMy current implementation is in python.\nIs there a smart way to reduce the amount of memory?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":442,"Q_Id":13663294,"Users Score":0,"Answer":"The most efficient way to do this is to maintain a hash map of word frequency in your mappers, and flush them to the output context when they reach a certain size (say 100,000 entries). 
Then clear out the map and continue (remember to flush the map in the cleanup method too).\nIf you still truely have 100 of millions of words, then you'll either need to wait a long time for the reducers to finish, or increase your cluster size and use more reducers.","Q_Score":0,"Tags":"python,hadoop,hadoop-streaming","A_Id":13663566,"CreationDate":"2012-12-01T20:12:00.000","Title":"Efficient Hadoop Word counting for large file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I have a pandas.DataFrame df with columns [\"A\", \"B\", \"C\", \"D\"], I can filter it using constructions like df[df[\"B\"] == 2].\nHow do I do the equivalent of df[df[\"B\"] == 2], if B is the name of a level in a MultiIndex instead? (For example, obtained by df.groupby([\"A\", \"B\"]).mean() or df.setindex([\"A\", \"B\"]))","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":148,"Q_Id":13701035,"Users Score":0,"Answer":"I see two ways of getting this, both of which look like a detour \u2013 which makes me think there must be a better way which I'm overlooking.\n\nConverting the MultiIndex into columns: df[df.reset_index()[\"B\"] == 2]\nSwapping the name I want to use to the start of the MultiIndex and then use lookup by index: df.swaplevel(0, \"B\").ix[2]","Q_Score":1,"Tags":"python,pandas","A_Id":13701036,"CreationDate":"2012-12-04T10:39:00.000","Title":"boolean indexing on index (instead of dataframe)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I have a pandas.DataFrame df with columns [\"A\", \"B\", \"C\", \"D\"], I can filter it using constructions like df[df[\"B\"] == 2].\nHow do I do the equivalent of df[df[\"B\"] == 2], if B is the name of a level in a MultiIndex instead? (For example, obtained by df.groupby([\"A\", \"B\"]).mean() or df.setindex([\"A\", \"B\"]))","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":148,"Q_Id":13701035,"Users Score":1,"Answer":"I would suggest either:\ndf.xs(2, level='B')\nor\ndf[df.index.get_level_values('B') == val]\nI'd like to make the syntax for the latter operation a little nicer.","Q_Score":1,"Tags":"python,pandas","A_Id":13755051,"CreationDate":"2012-12-04T10:39:00.000","Title":"boolean indexing on index (instead of dataframe)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I am trying to create a realtime plot of data that is being recorded to a SQL server. The format is as follows:\nDatabase: testDB\nTable: sensors\nFirst record contains 3 records. The first column is an auto incremented ID starting at 1. The second column is the time in epoch format. The third column is my sensor data. It is in the following format:\n23432.32 112343.3 53454.322 34563.32 76653.44 000.000 333.2123\nI am completely lost on how to complete this project. I have read many pages showing examples dont really understand them. They provide source code, but I am not sure where that code goes. I installed httpd on my server and that is where I stand. 
Does anyone know of a good how-to from beginning to end that I could follow? Or could someone post a good step by step for me to follow?\nThanks for your help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":428,"Q_Id":13772857,"Users Score":0,"Answer":"Install a httpd server\nInstall php\nWrite a php script to fetch the data from the database and render it\nas a webpage.\n\nThis is fairly elaborate request, with relatively little details given. More information will allow us to give better answers.","Q_Score":0,"Tags":"python,mysql,flot","A_Id":13774224,"CreationDate":"2012-12-07T23:52:00.000","Title":"Plotting data using Flot and MySQL","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I have this list called sumErrors that's 16000 rows and 1 column, and this list is already presorted into 5 different clusters. And what I'm doing is slicing the list for each cluster and finding the index of the minimum value in each slice.\nHowever, I can only find the first minimum index using argmin(). I don't think I can just delete the value, because otherwise it would shift the slices over and the indices is what I have to recover the original ID. Does anyone know how to get argmin() to spit out indices for the lowest three?\nOr perhaps a more optimal method? Maybe I should just assign ID numbers, but I feel like there maybe a more elegant method.","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":11034,"Q_Id":13783071,"Users Score":4,"Answer":"numpy.argpartition(cluster, 3) would be much more effective.","Q_Score":14,"Tags":"python,list,numpy,min","A_Id":37094880,"CreationDate":"2012-12-08T23:33:00.000","Title":"Finding the indices of the top three values via argmin() or min() in python\/numpy without mutation of list?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What does the error Numpy error: Matrix is singular mean specifically (when using the linalg.solve function)? I have looked on Google but couldn't find anything that made it clear when this error occurs.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":60944,"Q_Id":13795682,"Users Score":25,"Answer":"A singular matrix is one that is not invertible. This means that the system of equations you are trying to solve does not have a unique solution; linalg.solve can't handle this. \nYou may find that linalg.lstsq provides a usable solution.","Q_Score":13,"Tags":"python,numpy","A_Id":13795874,"CreationDate":"2012-12-10T05:47:00.000","Title":"Numpy error: Singular matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My question is about the algorithm which is used in Numpy's FFT function.\nThe documentation of Numpy says that it uses the Cooley-Tukey algorithm. However, as you may know, this algorithm works only if the number N of points is a power of 2. \nDoes numpy pad my input vector x[n] in order to calculate its FFT X[k]? 
(I don't think so, since the number of points I have in the output is also N). How could I actually \"see\" the code which is used by numpy for its FFT function?\nCheers!","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":6896,"Q_Id":13841296,"Users Score":2,"Answer":"In my experience the algorithms don't do automatic padding, or at least some of them don't. For example, running the scipy.signal.hilbert method on a signal that wasn't of length == a power of two took about 45 seconds. When I padded the signal myself with zeros to such a length, it took 100ms.\nYMMV but it's something to double check basically any time you run a signal processing algorithm.","Q_Score":4,"Tags":"python,numpy,fft","A_Id":19329962,"CreationDate":"2012-12-12T13:52:00.000","Title":"FFT in Numpy (Python) when N is not a power of 2","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script I hope to do roughly this:\n\ncalls some particle positions into an array\nruns algorithm over all 512^3 positions to distribute them to an NxNxN matrix\nfeed that matrix back to python\nuse plotting in python to visualise matrix (i.e. mayavi)\n\nFirst I have to write it in serial but ideally I want to parrallelize step 2 to speed up computation. What tools\/strategy might get me started. I know Python and Fortran well but not much about how to connect the two for my particular problem. At the moment I am doing everything in Fortran then loading my python program - I want to do it all at once.I've heard of py2f but I want to get experienced people's opinions before I go down one particular rabbit hole. Thanks\nEdit: The thing I want to make parallel is 'embarrassingly parallel' in that is is just a loop of N particles and I want to get through that loop as quickly as possible.","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":929,"Q_Id":13852646,"Users Score":2,"Answer":"An alternative approach to VladimirF's suggestion, could be to set up the two parts as a client server construct, where your Python part could talk to the Fortran part using sockets. Though this comes with the burden to implement some protocol for the interaction, it has the advantage, that you get a clean separation and can even go on running them on different machines with an interaction over the network.\nIn fact, with this approach you even could do the embarrassing parallel part, by spawning as many instances of the Fortran application as needed and feed them all with different data.","Q_Score":5,"Tags":"python,arrays,parallel-processing,fortran,f2py","A_Id":13858423,"CreationDate":"2012-12-13T03:51:00.000","Title":"I want Python as front end, Fortran as back end. I also want to make fortran part parallel - best strategy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to do K-Means Clustering using Kruskal's Minimum Spanning Tree Algorithm. 
My original design was to run the full-length Kruskal algorithm of the input and produce an MST, after which delete the last k-1 edges (or equivalently k-1 most expensive edges).\nOf course this is the same as running Kruskal algorithm and stopping it just before it adds its last k-1 edges.\nI want to use the second strategy i.e instead of running the full length Kruskal algorithm, stop it just after the number of clusters so far equals K. I'm using Union-Find data structure and using a list object in this Union-Find data structure.\nEach vertex on this graph is represented by its current cluster on this list e.g [1,2,3...] means vertices 1,2,3 are in their distinct independent clusters. If two vertices are joined their corresponding indices on the list data structure are updated to reflect this.\ne.g merging vertices 2 and 3 leaves the list data object as [1,2,2,4,5.....]\nMy strategy is then every time two nodes are merged, count the number of DISTINCT elements in the list and if it equals the number of desired clusters, stop. My worry is that this may not be the most efficient option. Is there a way I could count the number of distinct objects in a list efficiently?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":296,"Q_Id":13875584,"Users Score":1,"Answer":"One way is to sort your list and then run over the elements by comparing each one to the previous one. If they are not equal sum 1 to your \"distinct counter\". This operation is O(n), and for sorting you can use the sorting algorithm you prefer, such as quick sort or merge sort, but I guess there is an available sorting algorithm in the lib you use.\nAnother option is to create a hash table and add all the elements. The number of insertions will be the distinct elements, since repeated elements will not be inserted. I think this is O(1) in the best case so maybe this is the better solution. Good luck!\nHope this helps,\nD\u00eddac P\u00e9rez","Q_Score":0,"Tags":"python,python-3.x,k-means","A_Id":13875710,"CreationDate":"2012-12-14T09:09:00.000","Title":"Efficient way to find number of distinct elements in a list","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"New to Python.\nIn R, you can get the dimension of a matrix using dim(...). What is the corresponding function in Python Pandas for their data frame?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":147970,"Q_Id":13921647,"Users Score":165,"Answer":"df.shape, where df is your DataFrame.","Q_Score":104,"Tags":"python,pandas","A_Id":13921674,"CreationDate":"2012-12-17T20:27:00.000","Title":"Python - Dimension of Data Frame","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"sklearn.svm.SVC doesn't give the index of support vectors for sparse dataset. Is there any hack\/way to get the index of SVs?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":265,"Q_Id":13982983,"Users Score":0,"Answer":"Not without going to the cython code I am afraid. This has been on the todo list for way to long. Any help with it would be much appreciated. 
It shouldn't be too hard, I think.","Q_Score":2,"Tags":"python,machine-learning,libsvm,scikit-learn,scikits","A_Id":13986712,"CreationDate":"2012-12-21T01:14:00.000","Title":"sklearn.svm.SVC doesn't give the index of support vectors for sparse dataset?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Pylot 1.26 with Python 2.7 on Windows 7 64bit having installed Numpy 1.6.2 and Matplotlib 1.1.0. The test case executes and produces a report but the response time graph is empty (no data) and the throughput graph is just one straight line. \nI've tried the 32 bit and 64 bit installers but the result is the same.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":682,"Q_Id":13989166,"Users Score":0,"Answer":"I had the same problem. I spent some time on it today debugging a few things, and I realized that in my case the data collected to plot the charts wasn't correct and needed to be adjusted. What I did was just change the time from absolute to relative and dynamically adjust the range of the axis. I'm not that good at Python, so my code doesn't look that good.","Q_Score":0,"Tags":"python,numpy,matplotlib,pylot","A_Id":15980514,"CreationDate":"2012-12-21T11:18:00.000","Title":"Why are my Pylot graphs blank?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"After plotting streamlines using 'matplotlib.streamplot' I need to change the U V data and update the plot. For imshow and quiver there are the functions 'set_data' and 'set_UVC', respectively. There does not seem to be any similar function for streamlines. Is there any way to still update\/get similar functionality?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1159,"Q_Id":14020155,"Users Score":0,"Answer":"I suspect the answer is no, because if you change the vectors, it would need to re-compute the stream lines. The objects returned by streamplot are line and patch collections, which know nothing about the vectors. To get this functionality would require writing a new class to wrap everything up and finding a sensible way to re-use the existing objects.\nThe best bet is to use cla() (as suggested by dmcdougall) to clear your axes and just re-plot them. A slightly less drastic approach would be to just remove the artists added by streamplot.","Q_Score":4,"Tags":"python,matplotlib,scipy","A_Id":15859052,"CreationDate":"2012-12-24T10:35:00.000","Title":"update U V data for matplotlib streamplot","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm quite new to OpenCV, so please bear with me:\nI need to open up a temporary window for user input, but I need to be certain it won't overwrite a previously opened window.\nIs there a way to open up either an anonymous window, or somehow create a guaranteed unique window name?\nObviously a long random string would be pretty safe, but that seems like a hack.\nP.S. 
I'm using the Python bindings at the moment, but if you want to write a response in C\/C++ that's fine, I'm familiar with them.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":343,"Q_Id":14035161,"Users Score":2,"Answer":"In modules\/highgui\/src\/window_w32.cpp (or in some other file if you are not using Windows - look at void cv::namedWindow( const string& winname, int flags ) in ...src\/window.cpp) there is a function static CvWindow* icvFindWindowByName( const char* name ) which is probably what you need, but it's internal, so the authors of OpenCV for some reason didn't want others to use it (or didn't know someone might need it). \nI think that the best option is to use the system API to find out whether a window with a specific name exists.\nAlternatively, use something that is almost impossible to already be a window name, for example the current time in ms + user name + random number + random string (yeah, I know that the window name \"234564312cyriel123234123dgbdfbddfgb#$%grw$\" is not beautiful).","Q_Score":2,"Tags":"python,opencv","A_Id":14048691,"CreationDate":"2012-12-26T01:23:00.000","Title":"OpenCV anonymous\/guaranteed unique window","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list of X and Y coordinates from geodata of a specific part of the world. I want to assign each coordinate a weight, based upon where it lies in the graph.\nFor example: if a point lies in a place where there are a lot of other nodes around it, it lies in a high-density area and therefore has a higher weight. \nThe most immediate method I can think of is drawing circles of unit radius around each point, calculating whether the other points lie within them, and then using a function to assign a weight to that point. But this seems primitive.\nI've looked at pySAL and NetworkX but it looks like they work with graphs. I don't have any edges in the graph, just nodes.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":12400,"Q_Id":14070565,"Users Score":1,"Answer":"Yes, you do have edges, and they are the distances between the nodes. 
In your case, you have a complete graph with weighted edges.\nSimply derive the distance from each node to every other node -- which gives you O(N^2) time complexity -- and use both nodes and edges as input to one of the approaches you found.\nIt happens, though, that your problem seems to be more of an analysis problem than anything else; you should try to run some clustering algorithm on your data, like k-means, which clusters nodes based on a distance function, for which you can simply use the Euclidean distance.\nThe result of this algorithm is exactly what you'll need, as you'll have clusters of close elements; you'll know which and how many elements are assigned to each group, and you'll be able, according to these values, to generate the coefficient you want to assign to each node.\nThe only concern worth pointing out here is that you'll have to determine how many clusters (the k in k-means) you want to create.","Q_Score":7,"Tags":"python","A_Id":14070812,"CreationDate":"2012-12-28T13:53:00.000","Title":"Calculating Point Density using Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a standard practice for representing vectors as 1d or 2d ndarrays in NumPy? I'm moving from MATLAB, which represents vectors as 2d arrays.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1655,"Q_Id":14126201,"Users Score":1,"Answer":"In MATLAB (for historical reasons, I would argue) the basic type is an M-by-N array (matrix), so that scalars are 1-by-1 arrays and vectors are either N-by-1 or 1-by-N arrays. (Memory layout is always Fortran style.)\nThis \"limitation\" is not present in numpy: you have true scalars, and ndarrays can have as many dimensions as you like. (Memory layout can be C- or Fortran-contiguous.) For this reason there is no preferred (standard) practice. It is up to you, according to your application, to choose the one which better suits your needs.","Q_Score":5,"Tags":"python,matlab,numpy,linear-algebra","A_Id":14126790,"CreationDate":"2013-01-02T17:14:00.000","Title":"Performance\/standard using 1d vs 2d vectors in numpy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two 2-D arrays with the same shape (105,234), named A & B, essentially comprised of mean values from other arrays. I am familiar with Python's scipy package, but I can't seem to find a way to test whether or not the two arrays are statistically significantly different at each individual array index. I'm thinking this is just a large 2D paired T-test, but am having difficulty. Any ideas or other packages to use?","AnswerCount":4,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":7302,"Q_Id":14176280,"Users Score":-2,"Answer":"Go to MS Excel. If you don't have it, your workplace does, and there are alternatives. \nEnter the arrays of numbers in an Excel worksheet. Run the formula in the entry field, =TTEST(array1,array2,tails,type). One tail is one, two tails is two... easy peasy. It's a simple Student's t and I believe you may still need a t-table to interpret the statistic (internet). 
Yet it's quick for on the fly comparison of samples.","Q_Score":0,"Tags":"python,arrays,numpy,statistics,scipy","A_Id":26791595,"CreationDate":"2013-01-05T20:49:00.000","Title":"Test for statistically significant difference between two arrays","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to analyze microarray data using hierarchical clustering of the microarray columns (results from the individual microarray replicates) and PCA. \nI'm new to python. I have python 2.7.3, biopyhton, numpy, matplotlib, and networkx.\nAre there functions in python or biopython (similar to MATLAB's clustergram and mapcaplot) that I can use to do this?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":1029,"Q_Id":14191487,"Users Score":1,"Answer":"I recommend to use R Bioconductor and free software like Expander and MeV. Good flexible choice is a Cluster software with TreeViews. You can also run R and STATA or JMP from your Python codes and completely automate your data management.","Q_Score":3,"Tags":"python,bioinformatics,pca,biopython,hierarchical-clustering","A_Id":28408552,"CreationDate":"2013-01-07T07:14:00.000","Title":"Microarray hierarchical clustering and PCA with python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using python to prototype the algorithms of a computer vision system I'm creating. I would like to be able to easily log heterogeneous data, for example: images, numpy arrays, matplotlib plots, etc, from within the algorithms, and do that using two keys, one for the current frame number and another to describe the logged object. Then I would like to be able to browse all the data from a web browser. Finally, I would like to be able to easily process the logs to generate summaries, for example retrieve the key \"points\" for all the frame numbers and calculate some statistics on them. My intention is to use this logging subsystem to facilitate debugging the behaviour of the algorithms and produce summaries for benchmarking. \nI'm set to create this subsystem myself but I thought to ask first if someone has already done something similar. Does anybody know of any python package that I can use to do what I ask?\notherwise, does anybody have any advice on which tools to use to create this myself?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":281,"Q_Id":14201284,"Users Score":1,"Answer":"Another option for storage could be using hdf5 or pytables. Depending on how you structure the data, with pytables you can query the data at key \"points\". 
As noted in comments, I dont think an off the shelf solution exists.","Q_Score":1,"Tags":"python,logging,numpy,matplotlib","A_Id":14223556,"CreationDate":"2013-01-07T17:50:00.000","Title":"heterogeneous data logging and analysis","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a library, example or similar that allows me to loads a set of 2D projections of an object and then converts it into a 3D volume.\nFor example, I could have 6 pictures of a small toy and the program should allow me to view it as a 3D volume and eventually save it.\nThe object I need to convert is very similar to a cylinder (so the program doesn't have to 'understand' what type of object it is).","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1288,"Q_Id":14232451,"Users Score":2,"Answer":"There are several things you can mean, I think none of which currently exists in free software (but I may be wrong about that), and they differ in how hard they are to implement:\nFirst of all, \"a 3D volume\" is not a clear definition of what you want. There is not one way to store this information. A usual way (for computer games and animations) is to store it as a mesh with textures. Getting the textures is easy: you have the photographs. Creating the mesh can be really hard, depending on what exactly you want.\nYou say your object looks like a cylinder. If you want to just stitch your images together and paste them as a texture over a cylindrical mesh, that should be possible. If you know the angles at which the images are taken, the stitching will be even easier.\nHowever, the really cool thing that most people would want is to create any mesh, not just a cylinder, based on the stitching \"errors\" (which originate from the parallax effect, and therefore contain information about the depth of the pictures). I know Autodesk (the makers of AutoCAD) have a web-based tool for this (named 123-something), but they don't let you put it into your own program; you have to use their interface. So it's fine for getting a result, but not as a basis for a program of your own.\nOnce you have the mesh, you'll need a viewer (not view first, save later; it's the other way around). You should be able to use any 3D drawing program, for example Blender can view (and edit) many file types.","Q_Score":0,"Tags":"python,image-processing,3d,2d","A_Id":14233016,"CreationDate":"2013-01-09T09:53:00.000","Title":"2D image projections to 3D Volume","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a relatively simple function with three unknown input parameters for which I only know the upper and lower bounds. I also know what the output Y should be for all of my data. \nSo far I have done a simple grid search in python, looping through all of the possible parameter combinations and returning those results where the error between Y predicted and Y observed is within a set limit. \nI then look at the results to see which set of parameters performs best for each group of samples, look at the trade-off between parameters, see how outliers effect the data etc.. 
\nSo really my questions is - whilst the grid search method I'm using is a bit cumbersome, what advantages would there be in using Monte Carlo methods such as metropolis hastings instead?\nI am currently researching into MCMC methods, but don\u2019t have any practical experience in using them and, in this instance, can\u2019t quite see what might be gained.\nI\u2019d greatly appreciate any comments or suggestions\nMany Thanks","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":983,"Q_Id":14236371,"Users Score":2,"Answer":"When the search space becomes larger, it can become infeasible to do an exhaustive search. So we turn to Monte Carlo methods out of necessity.","Q_Score":1,"Tags":"python,montecarlo","A_Id":14236501,"CreationDate":"2013-01-09T13:31:00.000","Title":"Advantage of metropolis hastings or MonteCarlo methods over a simple grid search?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am totally new to Python, and I have to use some modules in my code, like numpy and scipy, but I have no permission on my hosting to install new modules using easy-install or pip ( and of course I don't know how to install new modules in a directory where I have permission [ I have SSH access ] ).\nI have downloaded numpy and used from numpy import * but it doesn't work. I also tried the same thing with scipy : from scipy import *, but it also don't work.\nHow to load \/ use new modules in Python without installing them [ numpy, scipy .. ] ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1476,"Q_Id":14242764,"Users Score":0,"Answer":"Use the --user option to easy_install or setup.py to indicate where the installation is to take place. It should point to a directory where you have write access.\nOnce the module has been built and installed, you then need to set the environmental variable PYTHONPATH to point to that location. When you next run the python command, you should be able to import the module.","Q_Score":1,"Tags":"python,numpy,scipy,python-module","A_Id":14242912,"CreationDate":"2013-01-09T17:18:00.000","Title":"use \/ load new python module without installation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: \"Registered online\", \"Accepts email notifications\" etc) and continuous data (ex: \"Age\", \"Length of membership\" etc). I haven't used scikit much before but I suppose that that Gaussian Naive Bayes is suitable for continuous data and that Bernoulli Naive Bayes can be used for categorical data. However, since I want to have both categorical and continuous data in my model, I don't really know how to handle this. Any ideas would be much appreciated!","AnswerCount":6,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":29596,"Q_Id":14254203,"Users Score":15,"Answer":"The simple answer: multiply result!! 
it's the same.\nNaive Bayes based on applying Bayes\u2019 theorem with the \u201cnaive\u201d assumption of independence between every pair of features - meaning you calculate the Bayes probability dependent on a specific feature without holding the others - which means that the algorithm multiply each probability from one feature with the probability from the second feature (and we totally ignore the denominator - since it is just a normalizer).\nso the right answer is:\n\ncalculate the probability from the categorical variables.\ncalculate the probability from the continuous variables.\nmultiply 1. and 2.","Q_Score":76,"Tags":"python,machine-learning,data-mining,classification,scikit-learn","A_Id":34036255,"CreationDate":"2013-01-10T09:08:00.000","Title":"Mixing categorial and continuous data in Naive Bayes classifier using scikit-learn","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: \"Registered online\", \"Accepts email notifications\" etc) and continuous data (ex: \"Age\", \"Length of membership\" etc). I haven't used scikit much before but I suppose that that Gaussian Naive Bayes is suitable for continuous data and that Bernoulli Naive Bayes can be used for categorical data. However, since I want to have both categorical and continuous data in my model, I don't really know how to handle this. Any ideas would be much appreciated!","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":29596,"Q_Id":14254203,"Users Score":0,"Answer":"You will need the following steps:\n\nCalculate the probability from the categorical variables (using predict_proba method from BernoulliNB)\nCalculate the probability from the continuous variables (using predict_proba method from GaussianNB)\nMultiply 1. and 2. AND\nDivide by the prior (either from BernoulliNB or from GaussianNB since they are the same) AND THEN\nDivide 4. by the sum (over the classes) of 4. This is the normalisation step.\n\nIt should be easy enough to see how you can add your own prior instead of using those learned from the data.","Q_Score":76,"Tags":"python,machine-learning,data-mining,classification,scikit-learn","A_Id":69929209,"CreationDate":"2013-01-10T09:08:00.000","Title":"Mixing categorial and continuous data in Naive Bayes classifier using scikit-learn","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: \"Registered online\", \"Accepts email notifications\" etc) and continuous data (ex: \"Age\", \"Length of membership\" etc). I haven't used scikit much before but I suppose that that Gaussian Naive Bayes is suitable for continuous data and that Bernoulli Naive Bayes can be used for categorical data. 
However, since I want to have both categorical and continuous data in my model, I don't really know how to handle this. Any ideas would be much appreciated!","AnswerCount":6,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":29596,"Q_Id":14254203,"Users Score":74,"Answer":"You have at least two options:\n\nTransform all your data into a categorical representation by computing percentiles for each continuous variables and then binning the continuous variables using the percentiles as bin boundaries. For instance for the height of a person create the following bins: \"very small\", \"small\", \"regular\", \"big\", \"very big\" ensuring that each bin contains approximately 20% of the population of your training set. We don't have any utility to perform this automatically in scikit-learn but it should not be too complicated to do it yourself. Then fit a unique multinomial NB on those categorical representation of your data.\nIndependently fit a gaussian NB model on the continuous part of the data and a multinomial NB model on the categorical part. Then transform all the dataset by taking the class assignment probabilities (with predict_proba method) as new features: np.hstack((multinomial_probas, gaussian_probas)) and then refit a new model (e.g. a new gaussian NB) on the new features.","Q_Score":76,"Tags":"python,machine-learning,data-mining,classification,scikit-learn","A_Id":14255284,"CreationDate":"2013-01-10T09:08:00.000","Title":"Mixing categorial and continuous data in Naive Bayes classifier using scikit-learn","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm wondering if there's a way to generate decreasing numbers within a certain range?\nI want to program to keep outputting until it reaches 0, and the highest number in the range must be positive.\nFor example, if the range is (0, 100), this could be a possible output:\n96\n57\n43\n23\n9\n0\nSorry for the confusion from my original post","AnswerCount":8,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":5706,"Q_Id":14260923,"Users Score":18,"Answer":"I would generate a list of n random numbers then sort them highest to lowest.","Q_Score":4,"Tags":"python,random,python-2.7,numbers","A_Id":14260955,"CreationDate":"2013-01-10T15:06:00.000","Title":"How to randomly generate decreasing numbers in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons.\nOne day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about \"big data\" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive.\nMy first thought is to use HDFStore to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier to use alternative. 
My question is this:\nWhat are some best-practice workflows for accomplishing the following:\n\nLoading flat files into a permanent, on-disk database structure\nQuerying that database to retrieve data to feed into a pandas data structure\nUpdating the database after manipulating pieces in pandas\n\nReal-world examples would be much appreciated, especially from anyone who uses pandas on \"large data\".\nEdit -- an example of how I would like this to work:\n\nIteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory.\nIn order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory.\nI would create new columns by performing various operations on the selected columns.\nI would then have to append these new columns into the database structure.\n\nI am trying to find a best-practice way of performing these steps. Reading links about pandas and pytables it seems that appending a new column could be a problem.\nEdit -- Responding to Jeff's questions specifically:\n\nI am building consumer credit risk models. The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns.\nTypical operations involve combining several columns using conditional logic into a new, compound column. For example, if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'. The result of these operations is a new column for every record in my dataset.\nFinally, I would like to append these new columns into the on-disk data structure. I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model.\nA typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case.\nIt's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. When creating new columns, however, I would pull all rows of data and only the columns I need for the operations.\nThe modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. 
At the very end of this process, I apply some learning techniques that create an equation out of those compound columns.\n\nIt is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics\/machine learning parlance).","AnswerCount":16,"Available Count":4,"Score":-0.024994793,"is_accepted":false,"ViewCount":341120,"Q_Id":14262433,"Users Score":-2,"Answer":"At the moment I am working \"like\" you, just on a smaller scale, which is why I don't have a PoC for my suggestion.\nHowever, I seem to have found success in using pickle as a caching system and outsourcing the execution of various functions into files - executing these files from my command \/ main file; for example, I use a prepare_use.py to convert object types and split a data set into test, validation and prediction data sets.\nHow does my caching with pickle work?\nI use strings in order to access pickle files that are dynamically created, depending on which parameters and data sets were passed (with that I try to capture and determine whether the program was already run, using .shape for the data set and a dict for the passed parameters). \nRespecting these measures, I get a string with which I try to find and read a .pickle file and can, if it is found, skip the processing time and jump to the step I am working on right now.\nUsing databases I encountered similar problems, which is why I found joy in using this solution; however, there are certainly constraints - for example, storing huge pickle sets due to redundancy.\nUpdating a table from before to after a transformation can be done with proper indexing - validating the information opens up a whole other book (I tried consolidating crawled rent data and stopped using a database after basically 2 hours, as I would have liked to jump back after every transformation process).\nI hope my 2 cents help you in some way.\nGreetings.","Q_Score":1156,"Tags":"python,mongodb,pandas,hdf5,large-data","A_Id":59647574,"CreationDate":"2013-01-10T16:20:00.000","Title":"\"Large data\" workflows using pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons.\nOne day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about \"big data\" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive.\nMy first thought is to use HDFStore to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier to use alternative. 
My question is this:\nWhat are some best-practice workflows for accomplishing the following:\n\nLoading flat files into a permanent, on-disk database structure\nQuerying that database to retrieve data to feed into a pandas data structure\nUpdating the database after manipulating pieces in pandas\n\nReal-world examples would be much appreciated, especially from anyone who uses pandas on \"large data\".\nEdit -- an example of how I would like this to work:\n\nIteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory.\nIn order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory.\nI would create new columns by performing various operations on the selected columns.\nI would then have to append these new columns into the database structure.\n\nI am trying to find a best-practice way of performing these steps. Reading links about pandas and pytables it seems that appending a new column could be a problem.\nEdit -- Responding to Jeff's questions specifically:\n\nI am building consumer credit risk models. The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns.\nTypical operations involve combining several columns using conditional logic into a new, compound column. For example, if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'. The result of these operations is a new column for every record in my dataset.\nFinally, I would like to append these new columns into the on-disk data structure. I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model.\nA typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case.\nIt's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. When creating new columns, however, I would pull all rows of data and only the columns I need for the operations.\nThe modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. 
At the very end of this process, I apply some learning techniques that create an equation out of those compound columns.\n\nIt is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics\/machine learning parlance).","AnswerCount":16,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":341120,"Q_Id":14262433,"Users Score":21,"Answer":"One more variation\nMany of the operations done in pandas can also be done as a db query (sql, mongo)\nUsing a RDBMS or mongodb allows you to perform some of the aggregations in the DB Query (which is optimized for large data, and uses cache and indexes efficiently)\nLater, you can perform post processing using pandas.\nThe advantage of this method is that you gain the DB optimizations for working with large data, while still defining the logic in a high level declarative syntax - and not having to deal with the details of deciding what to do in memory and what to do out of core.\nAnd although the query language and pandas are different, it's usually not complicated to translate part of the logic from one to another.","Q_Score":1156,"Tags":"python,mongodb,pandas,hdf5,large-data","A_Id":29910919,"CreationDate":"2013-01-10T16:20:00.000","Title":"\"Large data\" workflows using pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons.\nOne day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about \"big data\" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive.\nMy first thought is to use HDFStore to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier to use alternative. My question is this:\nWhat are some best-practice workflows for accomplishing the following:\n\nLoading flat files into a permanent, on-disk database structure\nQuerying that database to retrieve data to feed into a pandas data structure\nUpdating the database after manipulating pieces in pandas\n\nReal-world examples would be much appreciated, especially from anyone who uses pandas on \"large data\".\nEdit -- an example of how I would like this to work:\n\nIteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory.\nIn order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory.\nI would create new columns by performing various operations on the selected columns.\nI would then have to append these new columns into the database structure.\n\nI am trying to find a best-practice way of performing these steps. Reading links about pandas and pytables it seems that appending a new column could be a problem.\nEdit -- Responding to Jeff's questions specifically:\n\nI am building consumer credit risk models. 
The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns.\nTypical operations involve combining several columns using conditional logic into a new, compound column. For example, if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'. The result of these operations is a new column for every record in my dataset.\nFinally, I would like to append these new columns into the on-disk data structure. I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model.\nA typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case.\nIt's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. When creating new columns, however, I would pull all rows of data and only the columns I need for the operations.\nThe modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. At the very end of this process, I apply some learning techniques that create an equation out of those compound columns.\n\nIt is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics\/machine learning parlance).","AnswerCount":16,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":341120,"Q_Id":14262433,"Users Score":167,"Answer":"I think the answers above are missing a simple approach that I've found very useful. \nWhen I have a file that is too large to load in memory, I break up the file into multiple smaller files (either by row or cols)\nExample: In case of 30 days worth of trading data of ~30GB size, I break it into a file per day of ~1GB size. 
I subsequently process each file separately and aggregate results at the end\nOne of the biggest advantages is that it allows parallel processing of the files (either multiple threads or processes)\nThe other advantage is that file manipulation (like adding\/removing dates in the example) can be accomplished by regular shell commands, which is not be possible in more advanced\/complicated file formats\nThis approach doesn't cover all scenarios, but is very useful in a lot of them","Q_Score":1156,"Tags":"python,mongodb,pandas,hdf5,large-data","A_Id":20690383,"CreationDate":"2013-01-10T16:20:00.000","Title":"\"Large data\" workflows using pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons.\nOne day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about \"big data\" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive.\nMy first thought is to use HDFStore to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier to use alternative. My question is this:\nWhat are some best-practice workflows for accomplishing the following:\n\nLoading flat files into a permanent, on-disk database structure\nQuerying that database to retrieve data to feed into a pandas data structure\nUpdating the database after manipulating pieces in pandas\n\nReal-world examples would be much appreciated, especially from anyone who uses pandas on \"large data\".\nEdit -- an example of how I would like this to work:\n\nIteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory.\nIn order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory.\nI would create new columns by performing various operations on the selected columns.\nI would then have to append these new columns into the database structure.\n\nI am trying to find a best-practice way of performing these steps. Reading links about pandas and pytables it seems that appending a new column could be a problem.\nEdit -- Responding to Jeff's questions specifically:\n\nI am building consumer credit risk models. The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns.\nTypical operations involve combining several columns using conditional logic into a new, compound column. For example, if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'. The result of these operations is a new column for every record in my dataset.\nFinally, I would like to append these new columns into the on-disk data structure. 
I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model.\nA typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case.\nIt's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. When creating new columns, however, I would pull all rows of data and only the columns I need for the operations.\nThe modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. At the very end of this process, I apply some learning techniques that create an equation out of those compound columns.\n\nIt is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics\/machine learning parlance).","AnswerCount":16,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":341120,"Q_Id":14262433,"Users Score":72,"Answer":"If your datasets are between 1 and 20GB, you should get a workstation with 48GB of RAM. Then Pandas can hold the entire dataset in RAM. I know its not the answer you're looking for here, but doing scientific computing on a notebook with 4GB of RAM isn't reasonable.","Q_Score":1156,"Tags":"python,mongodb,pandas,hdf5,large-data","A_Id":19739768,"CreationDate":"2013-01-10T16:20:00.000","Title":"\"Large data\" workflows using pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a large project that does SPC analysis and have 1000's of different unrelated dataframe objects. Does anyone know of a module for storing objects in memory? I could use a python dictionary but would like it more elaborate and functional mechanisms like locking, thread safe, who has it and a waiting list etc? I was thinking of creating something that behaves like my local public library system. The way it checks in and out books to one owner ...etc.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1701,"Q_Id":14270163,"Users Score":1,"Answer":"Redis with redis-py is one solution. Redis is really fast and there are nice Python bindings. Pytables, as mentioned above, is a good choice as well. 
PyTables is HDF5, and is really really fast.","Q_Score":1,"Tags":"python,object,pandas,dataframe,storage","A_Id":14271696,"CreationDate":"2013-01-11T01:18:00.000","Title":"Pandas storing 1000's of dataframe objects","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the Scikit-learn Extremely Randomized Trees algorithm to get info about the relative feature importances and I have a question about how \"redundant features\" are ranked.\nIf I have two features that are identical (redundant) and important to the classification, the extremely randomized trees cannot detect the redundancy of the features. That is, both features get a high ranking. Is there any other way to detect that two features are actualy redundant?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":203,"Q_Id":14304420,"Users Score":0,"Answer":"Maybe you could extract the top n important features and then compute pairwise Spearman's or Pearson's correlations for those in order to detect redundancy only for the top informative features as it might not be feasible to compute all pairwise feature correlations (quadratic with the number of features).\nThere might be more clever ways to do the same by exploiting the statistics of the relative occurrences of the features as nodes in the decision trees though.","Q_Score":1,"Tags":"python-2.7,scikit-learn","A_Id":14309807,"CreationDate":"2013-01-13T14:27:00.000","Title":"Feature importance based on extremely randomize trees and feature redundancy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I read in an large python array from a csv file (20332 *17009) using window7 64 bit OS machine with 12 G ram. The array has values in the half of places, like the example below. I only need the array where has values for analysis, rather than the whole array. \n[0 0 0 0 0 0\n0 0 0 3 8 0\n0 4 2 7 0 0\n0 0 5 2 0 0\n0 0 1 0 0 0]\nI am wondering: is it possible to ignore 0 value for analysis and save more memory?\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":733,"Q_Id":14308889,"Users Score":2,"Answer":"Given your description, a sparse representation may not be very useful to you. There are many other options, though: \n\nMake sure your values are represented using the smallest data type possible. The example you show above is best represented as single-byte integers. Reading into a numpy array or python array will give you good control over data type.\nYou can trade memory for performance by only reading a part of the data at a time. If you re-write the entire dataset as binary instead of CSV, then you can use mmap to access the file as if it were already in memory (this would also make it faster to read and write).\nIf you really need the entire dataset in memory (and it really doesn't fit), then some sort of compression may be necessary. Sparse matrices are an option (as larsmans mentioned in the comments, both scipy and pandas have sparse matrix implementations), but these will only help if the fraction of zero-value entries is large. Better compression options will depend on the nature of your data. 
Consider breaking up the array into chunks and compressing those with a fast compression algorithm like RLE, SZIP, etc.","Q_Score":0,"Tags":"python,arrays","A_Id":14309992,"CreationDate":"2013-01-13T22:23:00.000","Title":"How to save memory for a large python array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In short ... I have a Python Pandas data frame that is read in from an Excel file using 'read_table'. I would like to keep a handful of the series from the data, and purge the rest. I know that I can just delete what I don't want one-by-one using 'del data['SeriesName']', but what I'd rather do is specify what to keep instead of specifying what to delete.\nIf the simplest answer is to copy the existing data frame into a new data frame that only contains the series I want, and then delete the existing frame in its entirety, I would satisfied with that solution ... but if that is indeed the best way, can someone walk me through it?\nTIA ... I'm a newb to Pandas. :)","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":30755,"Q_Id":14363640,"Users Score":1,"Answer":"You can also specify a list of columns to keep with the usecols option in pandas.read_table. This speeds up the loading process as well.","Q_Score":18,"Tags":"python,pandas","A_Id":44592825,"CreationDate":"2013-01-16T16:57:00.000","Title":"Python Pandas - Deleting multiple series from a data frame in one command","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I do data mining research and often have Python scripts that load large datasets from SQLite databases, CSV files, pickle files, etc. In the development process, my scripts often need to be changed and I find myself waiting 20 to 30 seconds waiting for data to load. \nLoading data streams (e.g. from a SQLite database) sometimes works, but not in all situations -- if I need to go back into a dataset often, I'd rather pay the upfront time cost of loading the data.\nMy best solution so far is subsampling the data until I'm happy with my final script. 
Does anyone have a better solution\/design practice?\nMy \"ideal\" solution would involve using the Python debugger (pdb) cleverly so that the data remains loaded in memory, I can edit my script, and then resume from a given point.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":806,"Q_Id":14369696,"Users Score":0,"Answer":"Write a script that does the selects, the object-relational conversions, then pickles the data to a local file.\nYour development script will start by unpickling the data and proceeding.\nIf the data is significantly smaller than physical RAM, you can memory map a file shared between two processes, and write the pickled data to memory.","Q_Score":3,"Tags":"python,performance,data-mining,pdb,large-data","A_Id":14369860,"CreationDate":"2013-01-16T23:21:00.000","Title":"How do I make large datasets load quickly in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I do data mining research and often have Python scripts that load large datasets from SQLite databases, CSV files, pickle files, etc. In the development process, my scripts often need to be changed and I find myself waiting 20 to 30 seconds waiting for data to load. \nLoading data streams (e.g. from a SQLite database) sometimes works, but not in all situations -- if I need to go back into a dataset often, I'd rather pay the upfront time cost of loading the data.\nMy best solution so far is subsampling the data until I'm happy with my final script. Does anyone have a better solution\/design practice?\nMy \"ideal\" solution would involve using the Python debugger (pdb) cleverly so that the data remains loaded in memory, I can edit my script, and then resume from a given point.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":806,"Q_Id":14369696,"Users Score":0,"Answer":"Jupyter notebook allows you to load a large data set into a memory resident data structure, such as a Pandas dataframe in one cell. Then you can operate on that data structure in subsequent cells without having to reload the data.","Q_Score":3,"Tags":"python,performance,data-mining,pdb,large-data","A_Id":63300344,"CreationDate":"2013-01-16T23:21:00.000","Title":"How do I make large datasets load quickly in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am attempting to fill holes in a binary image. The image is rather large so I have broken it into chunks for processing.\nWhen I use the scipy.ndimage.morphology.binary_fill_holes functions, it fills larger holes that belong in the image. So I tried using scipy.ndimage.morphology.binary_closing, which gave the desired results of filling small holes in the image. However, when I put the chunks back together, to create the entire image, I end up with seamlines because the binary_closing function removes any values from the border pixels of each chunk.\nIs there any way to avoid this effect?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2670,"Q_Id":14385921,"Users Score":1,"Answer":"Operations that involve information from neighboring pixels, such as closing will always have trouble at the edges. 
In your case, this is very easy to get around: just process subimages that are slightly larger than your tiling, and keep the good parts when stitching together.","Q_Score":6,"Tags":"python,image,image-processing,numpy,scipy","A_Id":14386145,"CreationDate":"2013-01-17T18:42:00.000","Title":"Scipy Binary Closing - Edge Pixels lose value","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to compress a huge python object ~15G, and save it on the disk. Due to requrement constraints I need to compress this file as much as possible. I am presently using zlib.compress(9). My main concern is the memory taken exceeds what I have available on the system 32g during compression, and going forward the size of the object is expected to increase. Is there a more efficient\/better way to achieve this.\nThanks.\nUpdate: Also to note the object that I want to save is a sparse numpy matrix, and that I am serializing the data before compressing, which also increases the memory consumption. Since I do not need the python object after it is serialized, would gc.collect() help?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1086,"Q_Id":14389279,"Users Score":5,"Answer":"Incremental (de)compression should be done with zlib.{de,}compressobj() so that memory consumption can be minimized. Additionally, higher compression ratios can be attained for most data by using bz2 instead.","Q_Score":1,"Tags":"python,memory,numpy,compression","A_Id":14389347,"CreationDate":"2013-01-17T22:26:00.000","Title":"Compress large python objects","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using PyML's SVM to classify reads, but would like to set the discriminant to a higher value than the default (which I assume is 0). How do I do it?\nPs. I'm using a linear kernel with the liblinear-optimizer if that matters.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":80,"Q_Id":14396632,"Users Score":1,"Answer":"Roundabout way of doing it below:\nUse the result.getDecisionFunction() method and choose according to your own preference. \nReturns a list of values like:\n[-1.0000000000000213, -1.0000000000000053, -0.9999999999999893]\nBetter answers still appreciated.","Q_Score":0,"Tags":"python,svm,pyml","A_Id":14396884,"CreationDate":"2013-01-18T10:13:00.000","Title":"Setting the SVM discriminant value in PyML","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to set up a slightly customised version of Spyder. When Spyder starts, it automatically imports a long list of modules, including things from matplotlib, numpy, scipy etc. 
Is there a way to add my own modules to that list?\nIn case it makes a difference, I'm using the Spyder configuration provided by the Python(X,Y) Windows installer.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":15729,"Q_Id":14400993,"Users Score":2,"Answer":"The startup script for Spyder is in site-packages\/spyderlib\/scientific_startup.py.\nCarlos' answer would also work, but this is what I was looking for.","Q_Score":8,"Tags":"python,import,module,spyder","A_Id":15023264,"CreationDate":"2013-01-18T14:27:00.000","Title":"Spyder default module import list","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to set up a slightly customised version of Spyder. When Spyder starts, it automatically imports a long list of modules, including things from matplotlib, numpy, scipy etc. Is there a way to add my own modules to that list?\nIn case it makes a difference, I'm using the Spyder configuration provided by the Python(X,Y) Windows installer.","AnswerCount":3,"Available Count":2,"Score":-0.1325487884,"is_accepted":false,"ViewCount":15729,"Q_Id":14400993,"Users Score":-2,"Answer":"If Spyder is executed as a python script by python binary, then you should be able to simply edit Spyder python sources and include the modules you need. You should take a look into how is it actually executed upon start.","Q_Score":8,"Tags":"python,import,module,spyder","A_Id":14401134,"CreationDate":"2013-01-18T14:27:00.000","Title":"Spyder default module import list","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can we load a pandas DataFrame in .NET space using iron python? If not I am thinking of converting pandas df into a csv file and then reading in .net space.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":8863,"Q_Id":14432059,"Users Score":10,"Answer":"No, Pandas is pretty well tied to CPython. Like you said, your best bet is to do the analysis in CPython with Pandas and export the result to CSV.","Q_Score":10,"Tags":"python,.net,pandas,ironpython,python.net","A_Id":14443106,"CreationDate":"2013-01-21T03:21:00.000","Title":"Can we load pandas DataFrame in .NET ironpython?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Gensim python toolkit to build tf-idf model for documents. So I need to create a dictionary for all documents first. However, I found Gensim does not use stemming before creating the dictionary and corpus. Am I right ?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":940,"Q_Id":14468078,"Users Score":0,"Answer":"I was also struggling with the same case. To overcome i first stammed documents using NLTK and later processed it with gensim. 
Probably it can be a easier and handy way to perform your task.","Q_Score":2,"Tags":"python,nlp,gensim","A_Id":35618939,"CreationDate":"2013-01-22T21:11:00.000","Title":"Is stemming used when gensim creates a dictionary for tf-idf model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to create a matrix containing 2 708 000 000 elements. When I try to create a numpy array of this size it gives me a value error. Is there any way I can increase the maximum array size?\na=np.arange(2708000000)\nValueError Traceback (most recent call last)\nValueError: Maximum allowed size exceeded","AnswerCount":4,"Available Count":2,"Score":-1.0,"is_accepted":false,"ViewCount":82332,"Q_Id":14525344,"Users Score":-9,"Answer":"It is indeed related to the system maximum address length, to say it simply, 32-bit system or 64-bit system. Here is an explanation for these questions, originally from Mark Dickinson\nShort answer: the Python object overhead is killing you. In Python 2.x on a 64-bit machine, a list of strings consumes 48 bytes per list entry even before accounting for the content of the strings. That's over 8.7 Gb of overhead for the size of array you describe. On a 32-bit machine it'll be a bit better: only 28 bytes per list entry.\nLonger explanation: you should be aware that Python objects themselves can be quite large: even simple objects like ints, floats and strings. In your code you're ending up with a list of lists of strings. On my (64-bit) machine, even an empty string object takes up 40 bytes, and to that you need to add 8 bytes for the list pointer that's pointing to this string object in memory. So that's already 48 bytes per entry, or around 8.7 Gb. Given that Python allocates memory in multiples of 8 bytes at a time, and that your strings are almost certainly non-empty, you're actually looking at 56 or 64 bytes (I don't know how long your strings are) per entry.\nPossible solutions:\n(1) You might do (a little) better by converting your entries from strings to ints or floats as appropriate.\n(2) You'd do much better by either using Python's array type (not the same as list!) or by using numpy: then your ints or floats would only take 4 or 8 bytes each.\nSince Python 2.6, you can get basic information about object sizes with the sys.getsizeof function. Note that if you apply it to a list (or other container) then the returned size doesn't include the size of the contained list objects; only of the structure used to hold those objects. Here are some values on my machine.","Q_Score":30,"Tags":"arrays,numpy,python-2.7,size,max","A_Id":23838980,"CreationDate":"2013-01-25T15:55:00.000","Title":"What's the maximum size of a numpy array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to create a matrix containing 2 708 000 000 elements. When I try to create a numpy array of this size it gives me a value error. 
Is there any way I can increase the maximum array size?\na=np.arange(2708000000)\nValueError Traceback (most recent call last)\nValueError: Maximum allowed size exceeded","AnswerCount":4,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":82332,"Q_Id":14525344,"Users Score":19,"Answer":"You're trying to create an array with 2.7 billion entries. If you're running 64-bit numpy, at 8 bytes per entry, that would be 20 GB in all.\nSo almost certainly you just ran out of memory on your machine. There is no general maximum array size in numpy.","Q_Score":30,"Tags":"arrays,numpy,python-2.7,size,max","A_Id":14525604,"CreationDate":"2013-01-25T15:55:00.000","Title":"What's the maximum size of a numpy array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So far I have been using python to generate permutations of matrices for finding magic squares. So what I have been doing so far (for 3x3 matrices) is that I find all the possible permutations of the set {1,2,3,4,5,6,7,8,9} using itertools.permutations, store them as a list and do my calculations and print my results.\nNow I want to find out magic squares of order 4. Since finding all permutations means 16! possibilities, I want to increase efficiency by placing likely elements in the corner, for example 1, 16 on diagonal one corners and 4, 13 on diagonal two corners.\nSo how would I find permutations of set {1,2....16} where some elements are not moved is my question","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2502,"Q_Id":14535650,"Users Score":1,"Answer":"Just pull the placed numbers out of the permutation set. Then insert them into their proper position in the generated permutations.\nFor your example you'd take out 1, 16, 4, 13. 
Permute on (2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15), for each permutation, insert 1, 16, 4, 13 where you have pre-selected to place them.","Q_Score":0,"Tags":"python,math,matrix,permutation,itertools","A_Id":14535721,"CreationDate":"2013-01-26T09:36:00.000","Title":"How would I go about finding all possible permutations of a 4x4 matrix with static corner elements?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to go through every row in a DataFrame index and remove all rows that are not between a certain time.\nI have been looking for solutions but none of them separate the Date from the Time, and all I want to do is drop the rows that are outside of a Time range.","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":19605,"Q_Id":14539992,"Users Score":3,"Answer":"You can also do:\n\n\ufeffrng = pd.date_range('1\/1\/2000', periods=24, freq='H')\nts = pd.Series(pd.np.random.randn(len(rng)), index=rng)\nts.ix[datetime.time(10):datetime.time(14)]\nOut[4]: \n2000-01-01 10:00:00 -0.363420\n2000-01-01 11:00:00 -0.979251\n2000-01-01 12:00:00 -0.896648\n2000-01-01 13:00:00 -0.051159\n2000-01-01 14:00:00 -0.449192\nFreq: H, dtype: float64\n\nDataFrame works same way.","Q_Score":17,"Tags":"python,pandas","A_Id":17913278,"CreationDate":"2013-01-26T18:15:00.000","Title":"Pandas Drop Rows Outside of Time Range","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know how I can solve for a root in python using scipy.optimize.fsolve. \nI have a function defined \nf = lambda : -1*numpy.exp(-x**2) and I want to solve for x setting the function to a certain nonzero. For instance, I want to solve for x using f(x) = 5. \nIs there a way to do this with fsolve or would I need to use another tool in scipy? In other words, I'm looking for something analogous to Maple's fsolve.","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":380,"Q_Id":14544838,"Users Score":4,"Answer":"This is easy if you change your definition of f(x). e.g. if you want f(x) = 5, define your function: g(x) = f(x) - 5 = 0","Q_Score":0,"Tags":"python,scipy,root,solver","A_Id":14544871,"CreationDate":"2013-01-27T05:55:00.000","Title":"find a value other than a root with fsolve in python's scipy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"For a lot of functions, it is possible to use either native Python or numpy to proceed. \nThis is the case for math functions, that are available with Python native import math, but also with numpy methods. This is also the case when it comes to arrays, with narray from numpy and pythons list comprehensions, or tuples. \nI have two questions relative to these features that are in Python and also in numpy\n\nin general, if method is available in native Python AND numpy, which of both solutions would you prefer ? In terms of efficiency ? 
Is it different and how Python and numpy would differ in their proceeding?\nMore particularly, regarding arrays, and basic functions that are dealing with arrays, like sort, concatenate..., which solution is more efficient ? What makes the efficiency of the most efficient solution?\n\nThis is very open and general question. I guess that will not impact my code greatly, but I am just wondering.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1039,"Q_Id":14545602,"Users Score":4,"Answer":"In general, it probably matters most (efficiency-wise) to avoid conversions between the two. If you're mostly using non-numpy functions on data, then they'll be internally operating using standard Python data types, and thus using numpy arrays would be inefficient due to the need to convert back and forth.\nSimilarly, if you're using a lot of numpy functions to manipulate data, transforming it all back to basic Python types in between would also be inefficient.\n\nAs far as choosing functions goes, use whichever one was designed to operate on the form your data is already in - e.g. if you already have a numpy array, use the numpy functions on it; similarly, if you have a basic Python data type, use the Python functions on it. numpy's functions are going to be optimized for working with numpy's data types.","Q_Score":3,"Tags":"python,performance,numpy","A_Id":14545631,"CreationDate":"2013-01-27T08:11:00.000","Title":"numpy vs native Python - most efficient way","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For a lot of functions, it is possible to use either native Python or numpy to proceed. \nThis is the case for math functions, that are available with Python native import math, but also with numpy methods. This is also the case when it comes to arrays, with narray from numpy and pythons list comprehensions, or tuples. \nI have two questions relative to these features that are in Python and also in numpy\n\nin general, if method is available in native Python AND numpy, which of both solutions would you prefer ? In terms of efficiency ? Is it different and how Python and numpy would differ in their proceeding?\nMore particularly, regarding arrays, and basic functions that are dealing with arrays, like sort, concatenate..., which solution is more efficient ? What makes the efficiency of the most efficient solution?\n\nThis is very open and general question. I guess that will not impact my code greatly, but I am just wondering.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1039,"Q_Id":14545602,"Users Score":1,"Answer":"When there's a choice between working with NumPy array and numeric lists, the former are typically faster.\nI don't quite understand the second question, so won't try to address it.","Q_Score":3,"Tags":"python,performance,numpy","A_Id":14545646,"CreationDate":"2013-01-27T08:11:00.000","Title":"numpy vs native Python - most efficient way","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to make computations in a python program, and I would prefer to make some of them in R. 
Is it possible to embed R code in python ?","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":13823,"Q_Id":14551472,"Users Score":3,"Answer":"When I need to do R calculations, I usually write R scripts, and run them from Python using the subprocess module. The reason I chose to do this was because the version of R I had installed (2.16 I think) wasn't compatible with RPy at the time (which wanted 2.14).\nSo if you already have your R installation \"just the way you want it\", this may be a better option.","Q_Score":13,"Tags":"python,r","A_Id":14552819,"CreationDate":"2013-01-27T19:48:00.000","Title":"Embed R code in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I learned, that neural networks can replicate any function. \nNormally the neural network is fed with a set of descriptors to its input neurons and then gives out a certain score at its output neuron. I want my neural network to recognize certain behaviours from a screen. Objects on the screen are already preprocessed and clearly visible, so recognition should not be a problem. \nIs it possible to use the neural network to recognize a pixelated picture of the screen and make decisions on that basis? The amount of training data would be huge of course. Is there way to teach the ANN by online supervised learning?\nEdit:\nBecause a commenter said the programming problem would be too general: \nI would like to implement this in python first, to see if it works. If anyone could point me to a resource where i could do this online-learning thing with python, i would be grateful.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":888,"Q_Id":14559547,"Users Score":0,"Answer":"This is not entirely correct.\nA 3-layer feedforward MLP can theoretically replicate any CONTINUOUS function.\nIf there are discontinuities, then you need a 4th layer. \nSince you are dealing with pixelated screens and such, you probably would need to consider a fourth layer. \nFinally, if you are looking at circular shapes, etc., than a radial basis function (RBF) network may be more suitable.","Q_Score":0,"Tags":"python,neural-network,image-recognition,online-algorithm","A_Id":14756113,"CreationDate":"2013-01-28T10:00:00.000","Title":"Can a neural network recognize a screen and replicate a finite set of actions?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need some advice on selecting statistics package for Python, I've done quite some search, but not sure if I get everything right, specifically on the differences between statsmodels and scipy.stats.\nOne thing that I know is those with scikits namespace are specific \"branches\" of scipy, and what used to be scikits.statsmodels is now called statsmodels. On the other hand there is also scipy.stats. 
What are the differences between the two, and which one is the statistics package for Python?\nThanks.\n--EDIT--\nI changed the title because some answers are not really related to the question, and I suppose that's because the title is not clear enough.","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":16292,"Q_Id":14573728,"Users Score":38,"Answer":"Statsmodels has scipy.stats as a dependency. Scipy.stats has all of the probability distributions and some statistical tests. It's more like library code in the vein of numpy and scipy. Statsmodels on the other hand provides statistical models with a formula framework similar to R and it works with pandas DataFrames. There are also statistical tests, plotting, and plenty of helper functions in statsmodels. Really it depends on what you need, but you definitely don't have to choose one. They have different aims and strengths.","Q_Score":26,"Tags":"python,scipy,scikits,statsmodels","A_Id":14575243,"CreationDate":"2013-01-29T00:28:00.000","Title":"Python statistics package: difference between statsmodel and scipy.stats","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need some advice on selecting statistics package for Python, I've done quite some search, but not sure if I get everything right, specifically on the differences between statsmodels and scipy.stats.\nOne thing that I know is those with scikits namespace are specific \"branches\" of scipy, and what used to be scikits.statsmodels is now called statsmodels. On the other hand there is also scipy.stats. What are the differences between the two, and which one is the statistics package for Python?\nThanks.\n--EDIT--\nI changed the title because some answers are not really related to the question, and I suppose that's because the title is not clear enough.","AnswerCount":3,"Available Count":3,"Score":-0.0665680765,"is_accepted":false,"ViewCount":16292,"Q_Id":14573728,"Users Score":-1,"Answer":"I think THE statistics package is numpy\/scipy. It works also great if you want to plot your data using matplotlib. \nHowever, as far as I know, matplotlib doesn't work with Python 3.x yet.","Q_Score":26,"Tags":"python,scipy,scikits,statsmodels","A_Id":14574087,"CreationDate":"2013-01-29T00:28:00.000","Title":"Python statistics package: difference between statsmodel and scipy.stats","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need some advice on selecting statistics package for Python, I've done quite some search, but not sure if I get everything right, specifically on the differences between statsmodels and scipy.stats.\nOne thing that I know is those with scikits namespace are specific \"branches\" of scipy, and what used to be scikits.statsmodels is now called statsmodels. On the other hand there is also scipy.stats. 
What are the differences between the two, and which one is the statistics package for Python?\nThanks.\n--EDIT--\nI changed the title because some answers are not really related to the question, and I suppose that's because the title is not clear enough.","AnswerCount":3,"Available Count":3,"Score":0.3215127375,"is_accepted":false,"ViewCount":16292,"Q_Id":14573728,"Users Score":5,"Answer":"I try to use pandas\/statsmodels\/scipy for my work on a day-to-day basis, but sometimes those packages come up a bit short (LOESS, anybody?). The problem with the RPy module is (last I checked, at least) that it wants a specific version of R that isn't current---my R installation is 2.16 (I think) and RPy wanted 2.14. So either you have to have two parallel installations of R, or you have to downgrade. (If you don't have R installed, then you can just install the correct version of R and use RPy.)\nSo when I need something that isn't in pandas\/statsmodels\/scipy I write R scripts, and run them with the subprocess module. This lets me interact with R as little as possible (which I really don't like programming in), but I can still leverage all the stuff that R has that the Python packages don't.\nThe lesson is that there isn't ever one solution to any problem---you have to assemble a whole bunch of parts that are all useful to you (and maybe write some of your own), in a way that you understand, to solve problems. (R aficionados will disagree, of course!)","Q_Score":26,"Tags":"python,scipy,scikits,statsmodels","A_Id":14575672,"CreationDate":"2013-01-29T00:28:00.000","Title":"Python statistics package: difference between statsmodel and scipy.stats","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an IPython Notebook that is using Pandas to back-test a rule-based trading system.\nI have a function that accepts various scalars and functions as parameters and outputs a stats pack as some tables and a couple of plots.\nFor automation, I want to be able to format this nicely into a \"page\" and then call the function in a loop while varying the inputs and have it output a number of pages for comparison, all from a single notebook cell.\nThe approach I am taking is to create IpyTables and then call _repr_html_(), building up the HTML output along the way so that I can eventually return it from the function that runs the loop.\nHow can I capture the output of the plots this way - matplotlib subplot objects don't seem to implement _repr_html_()?\nFeel free to suggest another approach entirely that you think might equally solve the problem.\nTIA","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2790,"Q_Id":14580684,"Users Score":1,"Answer":"Ok, if you go that route, this answer stackoverflow.com\/a\/5314808\/243434 on how to capture >matplotlib figures as inline PNGs may help \u2013 @crewbum\nTo prevent duplication of plots, try running with pylab disabled (double-check your config >files and the command line). 
\u2013 @crewbum\n\n--> this last requires a restart of the notebook: ipython notebook --pylab (NB no inline)","Q_Score":4,"Tags":"python,pandas,matplotlib,jupyter-notebook","A_Id":14600682,"CreationDate":"2013-01-29T10:25:00.000","Title":"How to grab matplotlib plot as html in ipython notebook?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"SciPy\/Numpy seems to support many filters, but not the root-raised cosine filter. Is there a trick to easily create one rather than calculating the transfer function? An approximation would be fine as well.","AnswerCount":6,"Available Count":1,"Score":-0.0665680765,"is_accepted":false,"ViewCount":13131,"Q_Id":14614966,"Users Score":-2,"Answer":"SciPy will support any filter. Just calculate the impulse response and use any of the appropriate scipy.signal filter\/convolve functions.","Q_Score":7,"Tags":"python,numpy,scipy,signal-processing","A_Id":15783554,"CreationDate":"2013-01-30T22:20:00.000","Title":"Easy way to implement a Root Raised Cosine (RRC) filter using Python & Numpy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm about to write my very own scaling, rotation, normalization functions in python. Is there a convenient way to avoid this? I found NumPy, but it kind-a seems like an overkill for my little 2D-needs.\nAre there basic vector operations available in the std python libs?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1480,"Q_Id":14635549,"Users Score":3,"Answer":"No, the standard in numpy. I wouldn't think of it as overkill, think of it as a very well written and tested library, even if you do just need a small portion of it. All the basic vector & matrix operations are implemented efficiently (falling back to C and Fortan) which makes it fast and memory efficient. Don't make your own, use numpy.","Q_Score":0,"Tags":"python,math,vector-graphics","A_Id":14635675,"CreationDate":"2013-01-31T21:33:00.000","Title":"Python vector transformation (normalize, scale, rotate etc.)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a data set which I do multiple mappings on. \nAssuming that I have 3 key-values pair for the reduce function, how do I modify the output such that I have 3 blobfiles - one for each of the key value pair?\nDo let me know if I can clarify further.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":139,"Q_Id":14635693,"Users Score":2,"Answer":"I don't think such functionality exists (yet?) in the GAE Mapreduce library.\nDepending on the size of your dataset, and the type of output required, you can small-time-investment hack your way around it by co-opting the reducer as another output writer. For example, if one of the reducer outputs should go straight back to the datastore, and another output should go to a file, you could open a file yourself and write the outputs to it. 
Alternatively, you could serialize and explicitly store the intermediate map results to a temporary datastore using operation.db.Put, and perform separate Map or Reduce jobs on that datastore. Of course, that will end up being more expensive than the first workaround.\nIn your specific key-value example, I'd suggest writing to a Google Cloud Storage File, and postprocessing it to split it into three files as required. That'll also give you more control over final file names.","Q_Score":0,"Tags":"python,google-app-engine,mapreduce","A_Id":20688782,"CreationDate":"2013-01-31T21:42:00.000","Title":"GAE MapReduce, How to write Multiple Outputs","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i tried searching stackoverflow for the tags [a-star] [and] [python] and [a-star] [and] [numpy], but nothing. i also googled it but whether due to the tokenizing or its existence, i got nothing.\nit's not much harder than your coding-interview tree traversals to implement. but, it would be nice to have a correct efficient implementation for everyone.\ndoes numpy have A*?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":15316,"Q_Id":14636918,"Users Score":1,"Answer":"No, there is no A* search in Numpy.","Q_Score":11,"Tags":"python,numpy,a-star","A_Id":42075989,"CreationDate":"2013-01-31T23:16:00.000","Title":"A-star search in numpy or python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to write a Monte Carlo simulation. In my simulation I need to generate many random variates from a discrete probability distribution. \nI do have a closed-form solution for the distribution and it has finite support; however, it is not a standard distribution. I am aware that I could draw a uniform[0,1) random variate and compare it to the CDF get a random variate from my distribution, but the parameters in the distributions are always changing. Using this method is too slow. \nSo I guess my question has two parts:\n\nIs there a method\/algorithm to quickly generate finite, discrete random variates without using the CDF?\nIs there a Python module and\/or a C++ library which already has this functionality?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":374,"Q_Id":14655681,"Users Score":0,"Answer":"Acceptance\\Rejection:\nFind a function that is always higher than the pdf. Generate 2 Random variates. The first one you scale to calculate the value, the second you use to decide whether to accept or reject the choice. 
Rinse and repeat until you accept a value.\nSorry I can't be more specific, but I haven't done it for a while..\nIts a standard algorithm, but I'd personally implement it from scratch, so I'm not aware of any implementations.","Q_Score":0,"Tags":"c++,python,algorithm,random,montecarlo","A_Id":14655846,"CreationDate":"2013-02-01T21:55:00.000","Title":"Is there a way to generate a random variate from a non-standard distribution without computing CDF?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to write a Monte Carlo simulation. In my simulation I need to generate many random variates from a discrete probability distribution. \nI do have a closed-form solution for the distribution and it has finite support; however, it is not a standard distribution. I am aware that I could draw a uniform[0,1) random variate and compare it to the CDF get a random variate from my distribution, but the parameters in the distributions are always changing. Using this method is too slow. \nSo I guess my question has two parts:\n\nIs there a method\/algorithm to quickly generate finite, discrete random variates without using the CDF?\nIs there a Python module and\/or a C++ library which already has this functionality?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":374,"Q_Id":14655681,"Users Score":0,"Answer":"Indeed acceptance\/rejection is the way to go if you know analytically your pdf. Let's call it f(x). Find a pdf g(x) such that there exist a constant c, such that c.g(x) > f(x), and such that you know how to simulate a variable with pdf g(x) - For example, as you work with a distribution with a finite support, a uniform will do: g(x) = 1\/(size of your domain) over the domain.\nThen draw a couple (G, U) such that G is simulated with pdf g(x), and U is uniform on [0, c.g(G)]. Then, if U < f(G), accept U as your variable. Otherwise draw again. The U you will finally accept will have f as a pdf.\nNote that the constant c determines the efficiency of the method. The smaller c, the most efficient you will be - basically you will need on average c drawings to get the right variable. Better get a function g simple enough (don't forget you need to draw variables using g as a pdf) but will the smallest possible c.","Q_Score":0,"Tags":"c++,python,algorithm,random,montecarlo","A_Id":14657373,"CreationDate":"2013-02-01T21:55:00.000","Title":"Is there a way to generate a random variate from a non-standard distribution without computing CDF?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to write a Monte Carlo simulation. In my simulation I need to generate many random variates from a discrete probability distribution. \nI do have a closed-form solution for the distribution and it has finite support; however, it is not a standard distribution. I am aware that I could draw a uniform[0,1) random variate and compare it to the CDF get a random variate from my distribution, but the parameters in the distributions are always changing. Using this method is too slow. 
\nSo I guess my question has two parts:\n\nIs there a method\/algorithm to quickly generate finite, discrete random variates without using the CDF?\nIs there a Python module and\/or a C++ library which already has this functionality?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":374,"Q_Id":14655681,"Users Score":0,"Answer":"If acceptance rejection is also too inefficient you could also try some Markov Chain MC method, they generate a sequence of samples each one dependent on the previous one, so by skipping blocks of them one can subsample obtaining a more or less independent set. They only need the PDF, or even just a multiple of it. Usually they work with fixed distributions, but can also be adapted to slowly changing ones.","Q_Score":0,"Tags":"c++,python,algorithm,random,montecarlo","A_Id":18890513,"CreationDate":"2013-02-01T21:55:00.000","Title":"Is there a way to generate a random variate from a non-standard distribution without computing CDF?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need some help taking data from a .txt file and putting it into an array. I have a very rudimentary understanding of Python, and I have read through the documentation sited in threads relevant to my problem, but after hours of attempting to do this I still have not been able to get anywhere. The data in my file looks like this:\n\n\n0.000000000000000000e+00 7.335686114232199684e-02\n1.999999999999999909e-07 7.571960558042964973e-01\n3.999999999999999819e-07 9.909475704320810374e-01\n5.999999999999999728e-07 3.412754086075696081e-01\n\n\nI used numpy.genfromtxt, but got the following output: array(nan)\nCould you tell me what the proper way to do this is?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":132,"Q_Id":14665379,"Users Score":0,"Answer":"numpy.loadtxt() is the function you are looking for. This returns a two-dimenstional array.","Q_Score":0,"Tags":"python,numpy","A_Id":14665480,"CreationDate":"2013-02-02T19:03:00.000","Title":"Taking data from a text file and putting it into a numpy array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"ive got this big\/easy problem that i need to solve but i cant..\nWhat im trying to do is to count cars on a highway, and i actually can detect the moving cars and put bounding boxes on them... but when i try to count them, i simply cant. I tried making a variable (nCars) and increment everytime the program creates a bounding box, but that seems to increment to many times.. \nThe question is: Whats the best way to count moving cars\/objects?\nPS: I Dont know if this is a silly question but im going nutts.... Thanks for everything (:\nAnd im new here but i know this website for some time (: Its great!","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1748,"Q_Id":14677763,"Users Score":1,"Answer":"I guess you are detecting the cars in each frame and creating a new bounding box each time a car is detected. 
This would explain the many increments of your variable.\nYou have to find a way to figure out if the car detected in one frame is the same car from the frame before (if you had a car detected in the previous frame). You might be able to achieve this by simply comparing the bounding box distances between two frames; if the distance is less than a threshold value, you can say that it's the same car from the previous frame. This way you can track the cars.\nYou could increment the counter variable when the detected car leaves the camera's field of view (exits the frame).\nThe tracking procedure I proposed here is very simple, try searching for \"object tracking\" to see what else you can use (maybe have a look at OpenCV's KLT tracking).","Q_Score":0,"Tags":"python,video,image-processing,opencv,computer-vision","A_Id":14727117,"CreationDate":"2013-02-03T21:55:00.000","Title":"Counting Cars in OpenCV + Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"ive got this big\/easy problem that i need to solve but i cant..\nWhat im trying to do is to count cars on a highway, and i actually can detect the moving cars and put bounding boxes on them... but when i try to count them, i simply cant. I tried making a variable (nCars) and increment everytime the program creates a bounding box, but that seems to increment to many times.. \nThe question is: Whats the best way to count moving cars\/objects?\nPS: I Dont know if this is a silly question but im going nutts.... Thanks for everything (:\nAnd im new here but i know this website for some time (: Its great!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1748,"Q_Id":14677763,"Users Score":0,"Answer":"You should use an sqlite database for store cars' informations.","Q_Score":0,"Tags":"python,video,image-processing,opencv,computer-vision","A_Id":14678095,"CreationDate":"2013-02-03T21:55:00.000","Title":"Counting Cars in OpenCV + Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to add some meta-information\/metadata to a pandas DataFrame?\nFor example, the instrument's name used to measure the data, the instrument responsible, etc.\nOne workaround would be to create a column with that information, but it seems wasteful to store a single piece of information in every row!","AnswerCount":13,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":59446,"Q_Id":14688306,"Users Score":14,"Answer":"Just ran into this issue myself. As of pandas 0.13, DataFrames have a _metadata attribute on them that does persist through functions that return new DataFrames. 
Also seems to survive serialization just fine (I've only tried json, but I imagine hdf is covered as well).","Q_Score":127,"Tags":"python,pandas","A_Id":25715719,"CreationDate":"2013-02-04T13:59:00.000","Title":"Adding meta-information\/metadata to pandas DataFrame","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently working on a NLP project that is trying to differentiate between synonyms (received from Python's NLTK with WordNet) in a context. I've looked into a good deal of NLP concepts trying to find exactly what I want, and the closest thing I've found is n-grams, but its not quite a perfect fit.\nSuppose I am trying to find the proper definition of the verb \"box\". \"Box\" could mean \"fight\" or \"package\"; however, somewhere else in the text, the word \"ring\" or \"fighter\" appears. As I understand it, an n-gram would be \"box fighter\" or \"box ring\", which is rather ridiculous as a phrase, and not likely to appear. But on a concept map, the \"box\" action might be linked with a \"ring\", since they are conceptually related.\nIs n-gram what I want? Is there another name for this? Any help on where to look for retrieving such relational data?\nAll help is appreciated.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1365,"Q_Id":14718543,"Users Score":2,"Answer":"You might want to look into word sense disambiguation (WSD), it is the problem of determining which \"sense\" (meaning) of a word is activated by the use of the word in a particular context, a process which appears to be largely unconscious in people.","Q_Score":3,"Tags":"python,nlp,nltk,n-gram","A_Id":14718635,"CreationDate":"2013-02-05T22:55:00.000","Title":"Natural Language Processing - Similar to ngram","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm a beginner in python and easily get stucked and confused...\nWhen I read a file which contains a table with numbers with digits, it reads it as an numpy.ndarray\nPython is changing the display of the numbers. \nFor example: \nIn the input file i have this number: 56143.0254154\nand in the output file the number is written as: 5.61430254e+04\nbut i want to keep the first format in the output file.\ni tried to use the string.format or locale.format functions but it doesn't work\nCan anybody help me to do this?\nThanks!\nRuxy","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":1536,"Q_Id":14733471,"Users Score":2,"Answer":"Try numpy.set_printoptions() -- there you can e.g. specify the number of digits that are printed and suppress the scientific notation. 
For example, numpy.set_printoptions(precision=8,suppress=True) will print 8 digits and no \"...e+xx\".","Q_Score":2,"Tags":"python,numpy,string-formatting,multidimensional-array","A_Id":14734299,"CreationDate":"2013-02-06T16:06:00.000","Title":"python changes format numbers ndarray many digits","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is probably a very easy question, but all the sources I have found on interpolation in Matlab are trying to correlate two values, all I wanted to benefit from is if I have data which is collected over an 8 hour period, but the time between data points is varying, how do I adjust it such that the time periods are equal and the data remains consistent?\nOr to rephrase from the approach I have been trying: I have GPS lat,lon and Unix time for these points; what I want to do is take the lat and lon at time 1 and time 3 and for the case where I don't know time 2, simply fill it with data from time 1 - is there a functional way to do this? (I know in something like Python Pandas you can use fill) but I'm unsure of how to do this in Matlab.","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":705,"Q_Id":14742893,"Users Score":2,"Answer":"What you can do is use interp1 function. This function will fit in deifferent way the numbers for a new X series.\nFor example if you have \nx=[1 3 5 6 10 12] \ny=[15 20 17 33 56 89]\nThis means if you want to fill in for x1=[1 2 3 4 5 6 7 ... 12], you will type \ny1=interp1(x,y,x1)","Q_Score":2,"Tags":"python,matlab,pandas,gps","A_Id":14754539,"CreationDate":"2013-02-07T03:13:00.000","Title":"Interpolation Function","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list of user:friends (50,000) and a list of event attendees (25,000 events and list of attendees for each event). I want to find top k friends with whom the user goes to the event. This needs to be done for each user. \nI tried traversing lists but is computationally very expensive. I am also trying to do it by creating weighted graph.(Python)\nLet me know if there is any other approach.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":177,"Q_Id":14826245,"Users Score":0,"Answer":"I'd give you a code sample if I better understood what your current data structures look like, but this sounds like a job for a pandas dataframe groupby (in case you don't feel like actually using a database as others have suggested).","Q_Score":2,"Tags":"python,data-structures","A_Id":14827656,"CreationDate":"2013-02-12T05:48:00.000","Title":"Search in Large data set","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list of user:friends (50,000) and a list of event attendees (25,000 events and list of attendees for each event). I want to find top k friends with whom the user goes to the event. This needs to be done for each user. \nI tried traversing lists but is computationally very expensive. 
I am also trying to do it by creating weighted graph.(Python)\nLet me know if there is any other approach.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":177,"Q_Id":14826245,"Users Score":0,"Answer":"Can you do something like this.\nIm assuming friends of a user is relatively less, and the events attended by a particular user is also much lesser than total number of events.\nSo have a boolean vector of attended events for each friend of the user.\nDoing a dot product and those that have max will be the friend who most likely resembles the user.\nAgain,.before you do this..you will have to filter some events to keep the size of your vectors manageable.","Q_Score":2,"Tags":"python,data-structures","A_Id":14826472,"CreationDate":"2013-02-12T05:48:00.000","Title":"Search in Large data set","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm solving a classification problem with sklearn's logistic regression in python.\nMy problem is a general\/generic one. I have a dataset with two classes\/result (positive\/negative or 1\/0), but the set is highly unbalanced. There are ~5% positives and ~95% negatives.\nI know there are a number of ways to deal with an unbalanced problem like this, but have not found a good explanation of how to implement properly using the sklearn package.\nWhat I've done thus far is to build a balanced training set by selecting entries with a positive outcome and an equal number of randomly selected negative entries. I can then train the model to this set, but I'm stuck with how to modify the model to then work on the original unbalanced population\/set.\nWhat are the specific steps to do this? I've poured over the sklearn documentation and examples and haven't found a good explanation.","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":18285,"Q_Id":14863125,"Users Score":22,"Answer":"Have you tried to pass to your class_weight=\"auto\" classifier? Not all classifiers in sklearn support this, but some do. Check the docstrings.\nAlso you can rebalance your dataset by randomly dropping negative examples and \/ or over-sampling positive examples (+ potentially adding some slight gaussian feature noise).","Q_Score":22,"Tags":"python,scikit-learn,classification","A_Id":14864547,"CreationDate":"2013-02-13T21:06:00.000","Title":"sklearn logistic regression with unbalanced classes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i want to change the default icon images of a matplotplib.\neven when i replaced the image with the same name and size from the image location \ni.e. C:\\Python27\\Lib\\site-packages\\matplotlib\\mpl-data\\images\\home.png \nits still plotting the the graphs with the same default images.\nIf I need to change the code of the image location in any file kindly direct me to the code and the code segment.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":888,"Q_Id":14869145,"Users Score":1,"Answer":"I suspect that exactly what you will have to do will depend on you gui toolkit. 
The code that you want to look at is in matplotlib\/lib\/matplotlib\/backends and you want to find the class that sub-classes NavigationToolbar2 in which ever backend you are using.","Q_Score":1,"Tags":"python,matplotlib","A_Id":14877368,"CreationDate":"2013-02-14T06:40:00.000","Title":"customize the default toolbar icon images of a matplotlib graph","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was thinking of doing a little project that involves recognizing simple two-dimensional objects using some kind of machine learning. I think it's better that I have each network devoted to recognizing only one type of object. So here are my two questions:\n\nWhat kind of network should I use? The two I can think of that could work are simple feed-forward networks and Hopfield networks. Since I also want to know how much the input looks like the target, Hopfield nets are probably not suitable.\nIf I use something that requires supervised learning and I only want one output unit that indicates how much the input looks like the target, what counter-examples should I show it during the training process? Just giving it positive examples I'm pretty sure won't work (the network will just learn to always say 'yes').\n\nThe images are going to be low resolution and black and white.","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":2529,"Q_Id":14875450,"Users Score":4,"Answer":"First, a note regarding the classification method to use. \nIf you intend to use the image pixels themselves as features, neural network might be a fitting classification method. In that case, I think it might be a better idea to train the same network to distinguish between the various objects, rather than using a separate network for each, because it would allow the network to focus on the most discriminative features. \nHowever, if you intend to extract synthetic features from the image and base the classification on them, I would suggest considering other classification methods, e.g. SVM.\nThe reason is that neural networks generally have many parameters to set (e.g. network size and architecture), making the process of building a classifier longer and more complicated.\nSpecifically regarding your NN-related questions, I would suggest using a feedforward network, which is relatively easy to build and train, with a softmax output layer, which allows assigning probabilities to the various classes. \nIn case you're using a single network for classification, the question regarding negative examples is irrelevant; for each class, other classes would be its negative examples. If you decide to use different networks, you can use the same counter-examples (i.e. other classes), but as a rule of thumb, I'd suggest showing no more than 2-10 negative examples per positive example.\nEDIT: \nbased on the comments below, it seems the problem is to decide how fitting is a given image (drawing) to a given concept, e.g. how similar to a tree is the the user-supplied tree drawing. \nIn this case, I'd suggest a radically different approach: extract visual features from each drawing, and perform knn classification, based on all past user-supplied drawings and their classifications (possibly, plus a predefined set generated by you). 
You can score the similarity either by the nominal distance to same-class examples, or by the class distribution of the closest matches. \nI know that this is not neccessarily what you're asking, but this seems to me an easier and more direct approach, especially given the fact that the number of examples and classes is expected to constantly grow.","Q_Score":0,"Tags":"python,language-agnostic,machine-learning,object-recognition,pybrain","A_Id":14877671,"CreationDate":"2013-02-14T13:01:00.000","Title":"How to Train Single-Object Recognition?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a text file with a bunch of number that contains newlines every 32 entries. I want to read this file a a column vector using Numpy. How can I use numpy.loadtxt and ignore the newlines such that the generated array is of size 1024x1 and not 32x32?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":174,"Q_Id":14884214,"Users Score":1,"Answer":"Just use loadtxt and reshape (or ravel) the resulting array.","Q_Score":0,"Tags":"python,numpy","A_Id":14884277,"CreationDate":"2013-02-14T21:18:00.000","Title":"Read Vector from Text File","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a matrix of the form, say e^(Ax) where A is a square matrix. How can I integrate it from a given value a to another value bso that the output is a corresponding array?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1283,"Q_Id":14899139,"Users Score":2,"Answer":"Provided A has the right properties, you could transform it to the diagonal form A0 by calculating its eigenvectors and eigenvalues. In the diagonal form, the solution is sol = [exp(A0*b) - exp(A0*a)] * inv(A0), where A0 is the diagonal matrix with the eigenvalues and inv(A0) just contains the inverse of the eigenvalues in its diagonal. Finally, you transform back the solution by multiplying it with the transpose of the eigenvalues from the left and the eigenvalues from the right: transpose(eigvecs) * sol * eigvecs.","Q_Score":1,"Tags":"python,matrix,numpy,integration,exponential","A_Id":14900251,"CreationDate":"2013-02-15T16:35:00.000","Title":"how to find the integral of a matrix exponential in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"According to the Enthought website, the EPD Python distribution uses MKL for numpy and scipy. Does EPD Free also use MKL? If not does it use another library for BLAS\/LAPACK? I am using EPD Free 7.3-2\nAlso, what library does the windows binary installer for numpy that can be found on scipy.org use?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":538,"Q_Id":14946512,"Users Score":2,"Answer":"The EPD Free 7.3 installers do not include MKL. 
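As a sketch of the matrix-exponential integral discussed above: when A is invertible, the integral of e^(Ax) from a to b equals inv(A) (e^(Ab) - e^(Aa)), which scipy.linalg.expm expresses without an explicit eigendecomposition; the example matrix is arbitrary and assumed invertible.

    import numpy as np
    from scipy.linalg import expm

    def integral_of_expm(A, a, b):
        # d/dx expm(A*x) = A @ expm(A*x), so for invertible A
        # integral_a^b expm(A*x) dx = inv(A) @ (expm(A*b) - expm(A*a)).
        # Solving a linear system avoids forming inv(A) explicitly.
        return np.linalg.solve(A, expm(A * b) - expm(A * a))

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # example matrix, assumed invertible
    print(integral_of_expm(A, a=0.0, b=1.0))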
The BLAS\/LAPACK libraries which they use are ATLAS on Linux & Windows and Accelerate on OSX.","Q_Score":1,"Tags":"lapack,blas,enthought,intel-mkl,epd-python","A_Id":14966265,"CreationDate":"2013-02-18T22:31:00.000","Title":"Does the EPD Free distribution use MKL?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to fit a straight line to my data to find out if there is a gradient.\nI am currently doing this with scipy.stats.linregress.\nI'm a little confused though, because one of the outputs of linregress is the \"standard error\", but I'm not sure how linregress calculated this, as the uncertainty of your data points is not given as an input.\nSurely the uncertainty on the data points influence how uncertain the given gradient is?\nThank you!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":340,"Q_Id":14998497,"Users Score":1,"Answer":"The standard error of a linear regression is the standard deviation of the serie obtained by substracting the fitted model to your data points. It indicates how well your data points can be fitted by a linear model.","Q_Score":0,"Tags":"python,scipy","A_Id":14999516,"CreationDate":"2013-02-21T09:20:00.000","Title":"error in Python gradient measurement","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to build a simple evolution simulation of agents controlled by neural network. In the current version each agent has feed-forward neural net with one hidden layer. The environment contains fixed amount of food represented as a red dot. When an agent moves, he loses energy, and when he is near the food, he gains energy. Agent with 0 energy dies. the input of the neural net is the current angle of the agent and a vector to the closest food. Every time step, the angle of movement of each agent is changed by the output of its neural net. The aim of course is to see food-seeking behavior evolves after some time. However, nothing happens. \nI don't know if the problem is the structure the neural net (too simple?) or the reproduction mechanism: to prevent population explosion, the initial population is about 20 agents, and as the population becomes close to 50, the reproduction chance approaches zero. When reproduction does occur, the parent is chosen by going over the list of agents from beginning to end, and checking for each agent whether or not a random number between 0 to 1 is less than the ratio between this agent's energy and the sum of the energy of all agents. If so, the searching is over and this agent becomes a parent, as we add to the environment a copy of this agent with some probability of mutations in one or more of the weights in his neural network.\nThanks in advance!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2480,"Q_Id":15008875,"Users Score":6,"Answer":"If the environment is benign enough (e.g it's easy enough to find food) then just moving randomly may be a perfectly viable strategy and reproductive success may be far more influenced by luck than anything else. 
Also consider unintended consequences: e.g if offspring is co-sited with its parent then both are immediately in competition with each other in the local area and this might be sufficiently disadvantageous to lead to the death of both in the longer term.\nTo test your system, introduce an individual with a \"premade\" neural network set up to steer the individual directly towards the nearest food (your model is such that such a thing exists and is reasobably easy to write down, right? If not, it's unreasonable to expect it to evolve!). Introduce that individual into your simulation amongst the dumb masses. If the individual doesn't quickly dominate, it suggests your simulation isn't set up to reinforce such behaviour. But if the individual enjoys reproductive success and it and its descendants take over, then your simulation is doing something right and you need to look elsewhere for the reason such behaviour isn't evolving.\nUpdate in response to comment:\nSeems to me this mixing of angles and vectors is dubious. Whether individuals can evolve towards the \"move straight towards nearest food\" behaviour must rather depend on how well an atan function can be approximated by your network (I'm sceptical). Again, this suggests more testing: \n\nset aside all the ecological simulation and just test perturbing a population\nof your style of random networks to see if they can evolve towards the expected function.\n(simpler, better) Have the network output a vector (instead of an angle): the direction the individual should move in (of course this means having 2 output nodes instead of one). Obviously the \"move straight towards food\" strategy is then just a straight pass-through of the \"direction towards food\" vector components, and the interesting thing is then to see whether your random networks evolve towards this simple \"identity function\" (also should allow introduction of a readymade optimised individual as described above).\n\nI'm dubious about the \"fixed amount of food\" too. (I assume you mean as soon as a red dot is consumed, another one is introduced). A more \"realistic\" model might be to introduce food at a constant rate, and not impose any artificial population limits: population limits are determined by the limitations of food supply. e.g If you introduce 100 units of food a minute and individuals need 1 unit of food per minute to survive, then your simulation should find it tends towards a long term average population of 100 individuals without any need for a clamp to avoid a \"population explosion\" (although boom-and-bust, feast-or-famine dynamics may actually emerge depending on the details).","Q_Score":8,"Tags":"python,artificial-intelligence,neural-network,artificial-life","A_Id":15011126,"CreationDate":"2013-02-21T17:44:00.000","Title":"Artificial life with neural networks","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there anyway to attach a descriptive version to an Index Column?\nFor Example, I use ISO3 CountryCode's to merge from different data sources\n'AUS' -> Australia etc. This is very convenient for merging different data sources, but when I want to print the data I would like the description version (i.e. Australia). 
I am imagining a dictionary attached to the Index Column of 'CountryCode' (where CountryCode is Key and CountryName is Value) and a flag that will print the Value instead of the Key which is used for data manipulation. \nIs the best solution to generate my own Dictionary() and then when it comes time to print or graph to then merge the country names in? This is ok, except it would be nice for ALL of the dataset information to be carried within the dataframe object.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":102,"Q_Id":15016187,"Users Score":1,"Answer":"I think the simplest solution split this into two columns in your DataFrame, one for country_code and country_name (you could name them something else).\nWhen you print or graph you can select which column is used.","Q_Score":1,"Tags":"python,pandas","A_Id":15016437,"CreationDate":"2013-02-22T03:09:00.000","Title":"Pandas: Attaching Descriptive Dict() to Hierarchical Index (i.e. CountryCode and CountryName)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know generally speaking FFT and multiplication is usually faster than direct convolve operation, when the array is relatively large. However, I'm convolving a very long signal (say 10 million points) with a very short response (say 1 thousand points). In this case the fftconvolve doesn't seem to make much sense, since it forces a FFT of the second array to the same size of the first array. Is it faster to just do direct convolve in this case?","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":10751,"Q_Id":15018526,"Users Score":3,"Answer":"FFT fast convolution via the overlap-add or overlap save algorithms can be done in limited memory by using an FFT that is only a small multiple (such as 2X) larger than the impulse response. It breaks the long FFT up into properly overlapped shorter but zero-padded FFTs.\nEven with the overlap overhead, O(NlogN) will beat M*N in efficiency for large enough N and M.","Q_Score":11,"Tags":"python,scipy,fft,convolution","A_Id":15020070,"CreationDate":"2013-02-22T06:54:00.000","Title":"Python SciPy convolve vs fftconvolve","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use the car evaluation dataset from the UCI repository and I wonder whether there is a convenient way to binarize categorical variables in sklearn. One approach would be to use the DictVectorizer of LabelBinarizer but here I'm getting k different features whereas you should have just k-1 in order to avoid collinearization. \n I guess I could write my own function and drop one column but this bookkeeping is tedious, is there an easy way to perform such transformations and get as a result a sparse matrix?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":22439,"Q_Id":15021521,"Users Score":15,"Answer":"DictVectorizer is the recommended way to generate a one-hot encoding of categorical variables; you can use the sparse argument to create a sparse CSR matrix instead of a dense numpy array. I usually don't care about multicollinearity and I haven't noticed a problem with the approaches that I tend to use (i.e. 
LinearSVC, SGDClassifier, Tree-based methods).\nIt shouldn't be a problem to patch the DictVectorizer to drop one column per categorical feature - you simple need to remove one term from DictVectorizer.vocabulary at the end of the fit method. (Pull requests are always welcome!)","Q_Score":12,"Tags":"python,machine-learning,scikit-learn","A_Id":15038477,"CreationDate":"2013-02-22T10:05:00.000","Title":"How to encode a categorical variable in sklearn?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a 10 by 10 game board. It is a 2D list, with another list of 2 inside. I used \nboard = [[['O', 'O']] * 10 for x in range(1, 11)]. So it will produce something like \n['O', 'O'] ['O', 'O']...\n['O', 'O'] ['O', 'O']...\nLater on I want to set a single cell to have 'C' I use board.gameBoard[animal.y][animal.x][0] = 'C'\nboard being the class the gameBoard is in, and animal is a game piece, x & y are just ints. Some times it will work and the specified cell will become ['C', 'O'], other times it will fill the entire row with ['C', 'O']['C', 'O']['C', 'O']['C', 'O']\nDoes anyone know why that might be happening?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":168,"Q_Id":15036694,"Users Score":0,"Answer":"Your board is getting multiple references to the same array. \n You need to replace the * 10 with another list comprehension.","Q_Score":1,"Tags":"python,list","A_Id":15036718,"CreationDate":"2013-02-23T03:25:00.000","Title":"2D Python list will have random results","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to compute the cosine similarity between strings in a list. For example, I have a list of over 10 million strings, each string has to determine similarity between itself and every other string in the list. What is the best algorithm I can use to efficiently and quickly do such task? Is the divide and conquer algorithm applicable? \nEDIT\nI want to determine which strings are most similar to a given string and be able to have a measure\/score associated with the similarity. I think what I want to do falls in line with clustering where the number of clusters are not initially known.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2043,"Q_Id":15041647,"Users Score":0,"Answer":"Work with the transposed matrix. That is what Mahout does on Hadoop to do this kind of task fast (or just use Mahout).\nEssentially, computing cosine similarity the naive way is bad. Because you end up computing a lot of 0 * something. 
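Following the DictVectorizer recommendation above, here is a minimal one-hot encoding sketch that returns a sparse matrix; the toy records are invented stand-ins for rows of a categorical dataset such as the car evaluation data.

    from sklearn.feature_extraction import DictVectorizer

    # Invented categorical records, one dict per sample.
    records = [
        {"buying": "high", "doors": "2", "safety": "low"},
        {"buying": "low", "doors": "4", "safety": "high"},
        {"buying": "med", "doors": "2", "safety": "med"},
    ]

    vec = DictVectorizer(sparse=True)   # sparse output instead of a dense array
    X = vec.fit_transform(records)      # one column per (feature, category) pair

    print(vec.feature_names_)           # e.g. ['buying=high', 'buying=low', ...]
    print(X.toarray())

Dropping one column per feature to avoid collinearity, as discussed above, would still require trimming the fitted vocabulary by hand.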
Instead, you better work in columns, and leave away all 0s there.","Q_Score":8,"Tags":"java,python,algorithm,divide-and-conquer,cosine-similarity","A_Id":15042390,"CreationDate":"2013-02-23T14:34:00.000","Title":"How to efficiently compute the cosine similarity between millions of strings","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to represent a number as the product of its factors.The number of factors that are used to represent the number should be from 2 to number of prime factors of the same number(this i s the maximum possible number of factors for a number).\nfor example taking the number 24:\nrepresentation of the number as two factors multiplication are 2*12, 8*3, 6*4 and so on...,\nrepresentation of the number as three factors multiplication are 2*2*6, 2*3*4 and so on...,\nrepresentation of the number as four factors multiplication(prime factors alone) are 2*2*2*3.\nplease help me get some simple and generic algorithm for this","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1491,"Q_Id":15068698,"Users Score":0,"Answer":"I know one...\nIf you're using python, you can use dictionary's to simplify the storage...\nYou'll have to check for every prime less than square root of the number.\nNow, suppose p^k divides your number n, your task, I suppose is to find k.\nHere's the method:\n\nint c = 0; int temp = n; while(temp!=0) { temp \/= p; c+= temp; }\n\nThe above is a C++ code but you'll get the idea...\nAt the end of this loop you'll have c = k\nAnd yeah, the link given by will is a perfect python implementation of the same algorithm","Q_Score":1,"Tags":"python","A_Id":15068825,"CreationDate":"2013-02-25T13:59:00.000","Title":"representation of a number as multiplication of its factors","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know about these column slice methods:\ndf2 = df[[\"col1\", \"col2\", \"col3\"]] and df2 = df.ix[:,0:2]\nbut I'm wondering if there is a way to slice columns from the front\/middle\/end of a dataframe in the same slice without specifically listing each one.\nFor example, a dataframe df with columns: col1, col2, col3, col4, col5 and col6.\nIs there a way to do something like this?\ndf2 = df.ix[:, [0:2, \"col5\"]]\nI'm in the situation where I have hundreds of columns and routinely need to slice specific ones for different requests. I've checked through the documentation and haven't seen something like this. 
Have I overlooked something?","AnswerCount":3,"Available Count":1,"Score":0.3215127375,"is_accepted":false,"ViewCount":28966,"Q_Id":15072005,"Users Score":5,"Answer":"If your column names have information that you can filter for, you could use df.filter(regex='name*').\nI am using this to filter between my 189 data channels from a1_01 to b3_21 and it works fine.","Q_Score":17,"Tags":"python,pandas","A_Id":15107442,"CreationDate":"2013-02-25T16:48:00.000","Title":"keep\/slice specific columns in pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was asked in an interview to come up with a solution with linear time for cartesian product. I did the iterative manner O(mn) and a recursive solution also which is also O(mn). But I could not reduce the complexity further. Does anyone have ideas on how this complexity can be improved? Also can anyone suggest an efficient recursive approach?","AnswerCount":2,"Available Count":2,"Score":0.3799489623,"is_accepted":false,"ViewCount":1149,"Q_Id":15079069,"Users Score":4,"Answer":"There are mn results; the minimum work you have to do is write each result to the output. So you cannot do better than O(mn).","Q_Score":2,"Tags":"python,algorithm,cartesian-product","A_Id":15079557,"CreationDate":"2013-02-26T00:15:00.000","Title":"Linear time algorithm to compute cartesian product","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was asked in an interview to come up with a solution with linear time for cartesian product. I did the iterative manner O(mn) and a recursive solution also which is also O(mn). But I could not reduce the complexity further. Does anyone have ideas on how this complexity can be improved? Also can anyone suggest an efficient recursive approach?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1149,"Q_Id":15079069,"Users Score":0,"Answer":"The question that comes to my mind reading this is, \"Linear with respect to what?\" Remember that in mathematics, all variables must be defined to have meaning. Big-O notation is no exception. Simply saying an algorithm is O(n) is meaningless if n is not defined.\nAssuming the question was meaningful, and not a mistake, my guess is that they wanted you to ask for clarification. Another possibility is that they wanted to see how you would respond when presented with an impossible situation.","Q_Score":2,"Tags":"python,algorithm,cartesian-product","A_Id":20166514,"CreationDate":"2013-02-26T00:15:00.000","Title":"Linear time algorithm to compute cartesian product","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am plotting a contourmap. When first plotting I noticed I had my axes wrong. So I switched the axes and noticed that the structure of both plots is different. On the first plot the axes and assignments are correct, but the structure is messy. 
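For the column-slicing question answered above, a short sketch that combines a positional slice with named columns and shows the regex filter; the frame is synthetic, and on modern pandas .iloc and .loc take the place of the old .ix.

    import pandas as pd

    df = pd.DataFrame({f"col{i}": range(3) for i in range(1, 7)})

    # Positional slice plus specific named columns in a single selection.
    subset = pd.concat([df.iloc[:, 0:2], df[["col5"]]], axis=1)

    # Or, when the names follow a pattern, select them by regular expression.
    pattern_subset = df.filter(regex=r"^col[1-3]$")

    print(subset.columns.tolist())          # ['col1', 'col2', 'col5']
    print(pattern_subset.columns.tolist())  # ['col1', 'col2', 'col3']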
On the second plot it is the other way around.\nSince it's a square matrix I don't see why there should be a sampling issue.\nTransposing the matrix with z-values or the meshgrid of x and y does not help either. Whatever way I plot x and y correctly it keeps looking messy.\nDoes anybody here know any more ideas which I can try or what might solve it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":15087303,"Users Score":0,"Answer":"The problem was the sampling. Although the arrays have the same size, the stepsize in the plot is not equal for x and y axis.","Q_Score":1,"Tags":"python,matplotlib,axes","A_Id":15090586,"CreationDate":"2013-02-26T10:56:00.000","Title":"contourf result differs when switching axes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I build a class with some iteration over coming data. The data are in an array form without use of numpy objects. On my code I often use .append to create another array. At some point I changed one of the big array 1000x2000 to numpy.array. Now I have an error after error. I started to convert all of the arrays into ndarray but comments like .append does not work any more. I start to have a problems with pointing to rows, columns or cells. and have to rebuild all code.\nI try to google an answer to the question: \"what is and advantage of using ndarray over normal array\" I can't find a sensible answer. Can you write when should I start to use ndarrays and if in your practice do you use both of them or stick to one only.\nSorry if the question is a novice level, but I am new to python, just try to move from Matlab and want to understand what are pros and cons. Thanks","AnswerCount":4,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":1532,"Q_Id":15111230,"Users Score":7,"Answer":"There are at least two main reasons for using NumPy arrays:\n\nNumPy arrays require less space than Python lists. So you can deal with more data in a NumPy array (in-memory) than you can with Python lists.\nNumPy arrays have a vast library of functions and methods unavailable\nto Python lists or Python arrays.\n\nYes, you can not simply convert lists to NumPy arrays and expect your code to continue to work. The methods are different, the bool semantics are different. For the best performance, even the algorithm may need to change.\nHowever, if you are looking for a Python replacement for Matlab, you will definitely find uses for NumPy. It is worth learning.","Q_Score":5,"Tags":"python,numpy,multidimensional-array","A_Id":15111407,"CreationDate":"2013-02-27T11:42:00.000","Title":"what is a reason to use ndarray instead of python array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I build a class with some iteration over coming data. The data are in an array form without use of numpy objects. On my code I often use .append to create another array. At some point I changed one of the big array 1000x2000 to numpy.array. Now I have an error after error. I started to convert all of the arrays into ndarray but comments like .append does not work any more. I start to have a problems with pointing to rows, columns or cells. 
and have to rebuild all code.\nI try to google an answer to the question: \"what is and advantage of using ndarray over normal array\" I can't find a sensible answer. Can you write when should I start to use ndarrays and if in your practice do you use both of them or stick to one only.\nSorry if the question is a novice level, but I am new to python, just try to move from Matlab and want to understand what are pros and cons. Thanks","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1532,"Q_Id":15111230,"Users Score":8,"Answer":"NumPy and Python arrays share the property of being efficiently stored in memory.\nNumPy arrays can be added together, multiplied by a number, you can calculate, say, the sine of all their values in one function call, etc. As HYRY pointed out, they can also have more than one dimension. You cannot do this with Python arrays.\nOn the other hand, Python arrays can indeed be appended to. Note that NumPy arrays can however be concatenated together (hstack(), vstack(),\u2026). That said, NumPy arrays are mostly meant to have a fixed number of elements.\nIt is common to first build a list (or a Python array) of values iteratively and then convert it to a NumPy array (with numpy.array(), or, more efficiently, with numpy.frombuffer(), as HYRY mentioned): this allows mathematical operations on arrays (or matrices) to be performed very conveniently (simple syntax for complex operations). Alternatively, numpy.fromiter() might be used to construct the array from an iterator. Or loadtxt() to construct it from a text file.","Q_Score":5,"Tags":"python,numpy,multidimensional-array","A_Id":15111278,"CreationDate":"2013-02-27T11:42:00.000","Title":"what is a reason to use ndarray instead of python array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I build a class with some iteration over coming data. The data are in an array form without use of numpy objects. On my code I often use .append to create another array. At some point I changed one of the big array 1000x2000 to numpy.array. Now I have an error after error. I started to convert all of the arrays into ndarray but comments like .append does not work any more. I start to have a problems with pointing to rows, columns or cells. and have to rebuild all code.\nI try to google an answer to the question: \"what is and advantage of using ndarray over normal array\" I can't find a sensible answer. Can you write when should I start to use ndarrays and if in your practice do you use both of them or stick to one only.\nSorry if the question is a novice level, but I am new to python, just try to move from Matlab and want to understand what are pros and cons. Thanks","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":1532,"Q_Id":15111230,"Users Score":1,"Answer":"Another great advantage of using NumPy arrays over built-in lists is the fact that NumPy has a C API that allows native C and C++ code to access NumPy arrays directly. 
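A small sketch of the list-versus-ndarray points made above: build incrementally with a plain list, convert once, then use whole-array operations; the values are arbitrary.

    import numpy as np

    # Accumulate with a plain list, convert to an ndarray once at the end.
    values = []
    for i in range(5):
        values.append(i * 0.5)
    arr = np.array(values)

    # Vectorized math replaces an explicit Python loop.
    print(np.sin(arr))

    # "Appending" to an ndarray really means concatenating into a new array.
    arr2 = np.concatenate([arr, np.array([10.0, 20.0])])
    print(arr2.shape)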
Hence, many Python libraries written in low-level languages like C are expecting you to work with NumPy arrays instead of Python lists.\nReference: Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython","Q_Score":5,"Tags":"python,numpy,multidimensional-array","A_Id":53073528,"CreationDate":"2013-02-27T11:42:00.000","Title":"what is a reason to use ndarray instead of python array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any documentation on the interdependencies and relationship between packages in the the scipy, numpy, pandas, scikit ecosystem?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":198,"Q_Id":15143253,"Users Score":6,"Answer":"AFAIK, here is the dependency tree (numpy is a dependency of everything):\n\nnumpy\n\nscipy\n\nscikit-learn\n\npandas","Q_Score":3,"Tags":"python,numpy,scipy,pandas,scikit-learn","A_Id":15143804,"CreationDate":"2013-02-28T18:45:00.000","Title":"Is there any documentation on the interdependencies between packages in the scipy, numpy, pandas, scikit ecosystem? Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to train svm.sparse.SVC in scikit-learn. Right now the dimension of the feature vectors is around 0.7 million and the number of feature vectors being used for training is 20k. I am providing input using csr sparse matrices as only around 500 dimensions are non-zero in each feature vector. The code is running since the past 5 hours. Is there any estimate on how much time it will take? Is there any way to do the training faster? Kernel is linear.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1802,"Q_Id":15154690,"Users Score":3,"Answer":"Try using sklearn.svm.LinearSVC. This also has a linear kernel, but the underlying implementation is liblinear, which is known to be faster. With that in mind, your data set isn't very small, so even this classifier might take a while.\n\nEdit after first comment:\nIn that I think you have several options, neither of which is perfect: \n\nThe non-solution option: call it a day and hope that training of svm.sparse.SVC has finished tomorrow morning. If you can, buy a better computer.\nThe cheat option: give up on probabilities. You haven't told us what your problem is, so they may not be essential.\nThe back-against-the-wall option: if you absolutely need probabilities and things must run faster, use a different classifier. Options include sklearn.naive_bayes.*, sklearn.linear_model.LogisticRegression. etc. These will be much faster to train, but the price you pay is somewhat reduced accuracy.","Q_Score":2,"Tags":"python,svm,scikit-learn","A_Id":15155284,"CreationDate":"2013-03-01T09:44:00.000","Title":"svm.sparse.SVC taking a lot of time to get trained","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working with OpenCV and python and would like to obtain the real world translation between two cameras. I'm using a single calibrated camera which is moving. 
I've already worked through feature matching, calculation of F via RANSAC, and calculation of E. To get the translation between cameras, I think I can use: w, u, vt = cv2.SVDecomp and then my t vector could be: t = u[:,2] An example output is:\n[[ -1.16399893 9.78967574 1.40910252]\n [ -7.79802049 -0.26646268 -13.85252956]\n [ -2.67690676 13.89538682 0.19209676]]\nt vector: [ 0.81586158 0.0750399 -0.57335756]\nI think I understand how the translation is not in real world scale so I need to provide that scale somehow if I want a real world translation. If I do know the distance between the cameras, can I just apply it directly to my t vector by multiplication? I think I'm missing something here...","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":726,"Q_Id":15157756,"Users Score":-1,"Answer":"I have the same problem. I think the monocular camera may need an object with a known 3D coordinate. That may help.","Q_Score":1,"Tags":"python,opencv,image-processing","A_Id":29629579,"CreationDate":"2013-03-01T12:25:00.000","Title":"Where do I add a scale factor to the Essential Matrix to produce a real world translation value","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to make comparisons between 155 image feature vectors. Every feature vector has 5 features.\nMy images are divided into 10 classes.\nUnfortunately I need at least 100 images per class to use a support vector machine. Is there any alternative?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1430,"Q_Id":15177490,"Users Score":0,"Answer":"If the images that belong to the same class are the results of transformations of some starting image, you can increase your training set size by applying transformations to your labeled examples. \nFor example, if you are doing character recognition, affine or elastic transformations can be used. P.Simard in Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis describes it in more detail. In the paper he uses Neural Networks but the same applies for SVM.","Q_Score":1,"Tags":"python,opencv,machine-learning,scikit-learn,classification","A_Id":15181340,"CreationDate":"2013-03-02T17:45:00.000","Title":"Alternative to support vector machine classifier in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any library available to compute the inverse of a function? To be more specific, given a function y=f(x) and a domain, is there any library which can output x=f^-1(y)? Sadly I cannot use matlab\/mathematica in my application; I am looking for a C\/Python library.","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":36620,"Q_Id":15200560,"Users Score":8,"Answer":"As has already been mentioned, not all functions are invertible. In some cases imposing additional constraints helps: think about the inverse of sin(x).\nOnce you are sure your function has a unique inverse, solve the equation f(x) = y. 
The solution gives you the inverse, y(x).\nIn python, look for nonlinear solvers from scipy.optimize.","Q_Score":11,"Tags":"python,c,math","A_Id":15200968,"CreationDate":"2013-03-04T11:32:00.000","Title":"Calculate inverse of a function--Library","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing the sentiment analysis for the Arabic language , I want to creat my own corpus , to do that , I collect 300 status from facebook and I classify them into positive and negative , now I want to do the tokenization of these status , in order to obain a list of words , and hen generate unigrams and bigrams, trigrams and use the cross fold validation , I'm using for the moment the nltk python, is this software able to do this task fr the arabic language or the rapis Minner will be better to work with , what do you think and I'm wondering how to generate the bigrams, trigrams and use the cross fold validation , is there any idea ??","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1057,"Q_Id":15282336,"Users Score":0,"Answer":"Well, I think that rapidminer is very interesting and can handle this task. It contains several operators dealing with text mining. Also, it allows the creation of new operators with high fluency.","Q_Score":2,"Tags":"python,nlp,nltk,sentiment-analysis,rapidminer","A_Id":15319789,"CreationDate":"2013-03-07T21:42:00.000","Title":"creating arabic corpus","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create a function that takes in two lists, the lists are not guaranteed to be of equal length, and returns all the interleavings between the two lists.\nInput: Two lists that do not have to be equal in size.\nOutput: All possible interleavings between the two lists that preserve the original list's order.\nExample: AllInter([1,2],[3,4]) -> [[1,2,3,4], [1,3,2,4], [1,3,4,2], [3,1,2,4], [3,1,4,2], [3,4,1,2]]\nI do not want a solution. I want a hint.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":1387,"Q_Id":15306231,"Users Score":1,"Answer":"As suggested by @airza, the itertools module is your friend.\nIf you want to avoid using encapsulated magical goodness, my hint is to use recursion.\nStart playing the process of generating the lists in your mind, and when you notice you're doing the same thing again, try to find the pattern. For example:\n\nTake the first element from the first list\nEither take the 2nd, or the first from the other list\nEither take the 3rd, or the 2nd if you didn't, or another one from the other list\n...\n\nOkay, that is starting to look like there's some greater logic we're not using. I'm just incrementing the numbers. Surely I can find a base case that works while changing the \"first element, instead of naming higher elements?\nPlay with it. 
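Returning to the function-inversion answer above, here is a minimal sketch of the nonlinear-solver route with scipy.optimize.brentq; it assumes f is continuous and strictly monotonic on the bracket, and the particular f is just an example I have chosen.

    from scipy.optimize import brentq

    def f(x):
        return x**3 + 2.0 * x  # strictly increasing, hence invertible

    def f_inverse(y, lo=-10.0, hi=10.0):
        # Solve f(x) - y = 0 on [lo, hi]; brentq needs f(lo) - y and
        # f(hi) - y to have opposite signs, so y must lie in [f(lo), f(hi)].
        return brentq(lambda x: f(x) - y, lo, hi)

    y = 5.0
    x = f_inverse(y)
    print(x, f(x))  # f(x) should reproduce y up to the solver tolerance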
:)","Q_Score":5,"Tags":"python,algorithm","A_Id":15306292,"CreationDate":"2013-03-09T01:45:00.000","Title":"How to calculate all interleavings of two lists?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create a function that takes in two lists, the lists are not guaranteed to be of equal length, and returns all the interleavings between the two lists.\nInput: Two lists that do not have to be equal in size.\nOutput: All possible interleavings between the two lists that preserve the original list's order.\nExample: AllInter([1,2],[3,4]) -> [[1,2,3,4], [1,3,2,4], [1,3,4,2], [3,1,2,4], [3,1,4,2], [3,4,1,2]]\nI do not want a solution. I want a hint.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1387,"Q_Id":15306231,"Users Score":0,"Answer":"You can try something a little closer to the metal and more elegant (in my opinion): iterate through the different possible slices. Basically, step through all three arguments to the standard slice operation, removing anything already added to the final list. I can post a code snippet if you're interested.","Q_Score":5,"Tags":"python,algorithm","A_Id":15307850,"CreationDate":"2013-03-09T01:45:00.000","Title":"How to calculate all interleavings of two lists?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I have algorithms (easily searchable on the net) for prime factorization and divisor acquisition, but I don't know how to scale them to finding those divisors within a range. For example, all divisors of 100 between 23 and 49 (arbitrary). But I also need something efficient so I can scale this to big numbers in larger ranges. At first I was thinking of using an array that's the size of the range and then using all the primes <= the upper bound to sieve all the elements in that array to return an eventual list of divisors, but for large ranges this would be too memory intensive.\nIs there a simple way to just directly generate the divisors?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1052,"Q_Id":15329256,"Users Score":0,"Answer":"As Malvolio was (indirectly) getting at, I personally wouldn't use the prime factorization if you want to find factors in a range. I would start at int t = (int)(sqrt(n)) and then decrement until either\n1. t is a factor, or\n2. both t and n\/t have entered the range (a flag) and then left it.\nOr, if your range is relatively small, check those values themselves.","Q_Score":1,"Tags":"c++,python,algorithm,primes,prime-factoring","A_Id":15329656,"CreationDate":"2013-03-11T00:01:00.000","Title":"How can I efficiently get all divisors of X within a range if I have X's prime factorization?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a NumPy array that is of size (3, 3). When I print the shape of the array within the __main__ module I get (3, 3). 
However I am passing this array to a function and when I print its size in the function I get (3, ).\nWhy does this happen?\nAlso, what does it mean for a tuple to have its last element unspecified? That is, shouldn't (3, ) be an invalid tuple in the first place?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1032,"Q_Id":15330521,"Users Score":2,"Answer":"To answer your second question:\nTuples in Python can have any length n; that is, you can have a 1-, 2-, 3-, ..., n-element tuple. Due to the syntax, the way you represent a 1-element tuple is ('element',), where the trailing comma is mandatory. If you have ('element') then it is simply the expression inside the parentheses. So (3) + 4 == 7, but (3,) + 4 == TypeError. Likewise ('element') == 'element'.\nTo answer your first question:\nYou're more than likely doing something wrong with passing the array around. There is no reason for the NumPy array to misrepresent itself without some type of mutation to the array.","Q_Score":1,"Tags":"python,numpy,scipy","A_Id":15330625,"CreationDate":"2013-03-11T03:01:00.000","Title":"NumPy array size issue","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a NumPy array that is of size (3, 3). When I print the shape of the array within the __main__ module I get (3, 3). However I am passing this array to a function and when I print its size in the function I get (3, ).\nWhy does this happen?\nAlso, what does it mean for a tuple to have its last element unspecified? That is, shouldn't (3, ) be an invalid tuple in the first place?","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":1032,"Q_Id":15330521,"Users Score":2,"Answer":"A tuple like this: (3, ) means that it's a tuple with a single element (a single dimension, in this case). That's the correct syntax - with a trailing , - because if it looked like this: (3) then Python would interpret it as a number surrounded by parentheses, not a tuple.\nIt'd be useful to see the actual code, but I'm guessing that you're not passing the entire array, only a row (or a column) of it.","Q_Score":1,"Tags":"python,numpy,scipy","A_Id":15330600,"CreationDate":"2013-03-11T03:01:00.000","Title":"NumPy array size issue","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to search for about 100 words in blocks of data (approximately 20000 blocks), and each block consists of about 20 words. The blocks should be returned in decreasing order of the number of matches. The brute force technique is very cumbersome because you have to search for all 100 words one by one and then combine the number of related matches in a complicated manner. Is there any other algorithm which allows searching for multiple words at the same time and storing the number of matching words?\nThank you","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":15330568,"Users Score":0,"Answer":"Why not consider using multithreading to store the results? Make an array with a size equal to the number of blocks, have each thread count the matches for one block, and have each thread write its result to the corresponding entry in the array. 
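Before reaching for threads, here is a single-threaded sketch of the per-block counting for the multi-word search question above; one set intersection per block already handles all query words at once, and the sample data is invented.

    # Invented data: ~100 query words, many blocks of ~20 words each.
    query_words = {"python", "numpy", "matrix", "fast"}
    blocks = [
        "python code for a fast numpy matrix multiply".split(),
        "cooking recipes and travel notes".split(),
        "fast python tips".split(),
    ]

    # One set intersection per block counts all matching query words at once.
    scored = [(len(query_words & set(block)), i) for i, block in enumerate(blocks)]

    # Blocks in decreasing order of the number of matches.
    for matches, index in sorted(scored, reverse=True):
        print(matches, " ".join(blocks[index]))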
Later on you sort the array by decreasing order then you get the result.","Q_Score":0,"Tags":"c++,python","A_Id":15330640,"CreationDate":"2013-03-11T03:08:00.000","Title":"Searching multiple words in many blocks of data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Right now I have a python program building a fairly large 2D numpy array and saving it as a tab delimited text file using numpy.savetxt. The numpy array contains only floats. I then read the file in one row at a time in a separate C++ program.\nWhat I would like to do is find a way to accomplish this same task, changing my code as little as possible such that I can decrease the size of the file I am passing between the two programs. \nI found that I can use numpy.savetxt to save to a compressed .gz file instead of a text file. This lowers the file size from ~2MB to ~100kB. \nIs there a better way to do this? Could I, perhaps, write the numpy array in binary to the file to save space? If so, how would I do this so that I can still read it into the C++ program?\nThank you for the help. I appreciate any guidance I can get.\nEDIT:\nThere are a lot of zeros (probably 70% of the values in the numpy array are 0.0000) I am not sure of how I can somehow exploit this though and generate a tiny file that my c++ program can read in","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":7265,"Q_Id":15369985,"Users Score":0,"Answer":"If you don't mind installing additional packages (for both python and c++), you can use [BSON][1] (Binary JSON).","Q_Score":6,"Tags":"python,numpy,scipy","A_Id":15370151,"CreationDate":"2013-03-12T19:07:00.000","Title":"python - saving numpy array to a file (smallest size possible)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Right now I have a python program building a fairly large 2D numpy array and saving it as a tab delimited text file using numpy.savetxt. The numpy array contains only floats. I then read the file in one row at a time in a separate C++ program.\nWhat I would like to do is find a way to accomplish this same task, changing my code as little as possible such that I can decrease the size of the file I am passing between the two programs. \nI found that I can use numpy.savetxt to save to a compressed .gz file instead of a text file. This lowers the file size from ~2MB to ~100kB. \nIs there a better way to do this? Could I, perhaps, write the numpy array in binary to the file to save space? If so, how would I do this so that I can still read it into the C++ program?\nThank you for the help. I appreciate any guidance I can get.\nEDIT:\nThere are a lot of zeros (probably 70% of the values in the numpy array are 0.0000) I am not sure of how I can somehow exploit this though and generate a tiny file that my c++ program can read in","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":7265,"Q_Id":15369985,"Users Score":1,"Answer":"numpy.ndarray.tofile and numpy.fromfile are useful for direct binary output\/input from python. 
std::ostream::write std::istream::read are useful for binary output\/input in c++.\nYou should be careful about endianess if the data are transferred from one machine to another.","Q_Score":6,"Tags":"python,numpy,scipy","A_Id":15370191,"CreationDate":"2013-03-12T19:07:00.000","Title":"python - saving numpy array to a file (smallest size possible)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Right now I have a python program building a fairly large 2D numpy array and saving it as a tab delimited text file using numpy.savetxt. The numpy array contains only floats. I then read the file in one row at a time in a separate C++ program.\nWhat I would like to do is find a way to accomplish this same task, changing my code as little as possible such that I can decrease the size of the file I am passing between the two programs. \nI found that I can use numpy.savetxt to save to a compressed .gz file instead of a text file. This lowers the file size from ~2MB to ~100kB. \nIs there a better way to do this? Could I, perhaps, write the numpy array in binary to the file to save space? If so, how would I do this so that I can still read it into the C++ program?\nThank you for the help. I appreciate any guidance I can get.\nEDIT:\nThere are a lot of zeros (probably 70% of the values in the numpy array are 0.0000) I am not sure of how I can somehow exploit this though and generate a tiny file that my c++ program can read in","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":7265,"Q_Id":15369985,"Users Score":1,"Answer":"Use the an hdf5 file, they are really simple to use through h5py and you can use set compression a flag. Note that hdf5 has also a c++ interface.","Q_Score":6,"Tags":"python,numpy,scipy","A_Id":19226920,"CreationDate":"2013-03-12T19:07:00.000","Title":"python - saving numpy array to a file (smallest size possible)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using a rather large dataset of ~37 million data points that are hierarchically indexed into three categories country, productcode, year. The country variable (which is the countryname) is rather messy data consisting of items such as: 'Austral' which represents 'Australia'. I have built a simple guess_country() that matches letters to words, and returns a best guess and confidence interval from a known list of country_names. Given the length of the data and the nature of hierarchy it is very inefficient to use .map() to the Series: country. [The guess_country function takes ~2ms \/ request]\nMy question is: Is there a more efficient .map() which takes the Series and performs map on only unique values? (Given there are a LOT of repeated countrynames)","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2544,"Q_Id":15425492,"Users Score":3,"Answer":"There isn't, but if you want to only apply to unique values, just do that yourself. Get mySeries.unique(), then use your function to pre-calculate the mapped alternatives for those unique values and create a dictionary with the resulting mappings. Then use pandas map with the dictionary. 
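A minimal sketch of the "map only the unique values" idea from the answer above; guess_country here is a stand-in for the asker's expensive matcher, so its body is invented.

    import pandas as pd

    def guess_country(name):
        # Placeholder for the slow fuzzy-matching function (~2 ms per call).
        fixes = {"Austral": "Australia", "Germny": "Germany"}
        return fixes.get(name, name)

    countries = pd.Series(["Austral", "Austral", "Germny", "France", "Austral"])

    # Call the expensive function once per unique value, then map the results.
    mapping = {value: guess_country(value) for value in countries.unique()}
    cleaned = countries.map(mapping)

    print(cleaned.tolist())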
This should be about as fast as you can expect.","Q_Score":0,"Tags":"python,pandas","A_Id":15425560,"CreationDate":"2013-03-15T05:37:00.000","Title":"Pandas: More Efficient .map() function or method?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to choose the best parameters for the hysteresis phase in the canny function of OpenCV. I found some similar questions in stackoverflow but they didn't solve my problem. So far I've found that there are two main approaches:\n\nCompute mean and standard deviation and set the thresholds as: lowT = mean - std, highT = mean+std\nCompute the median and set the thresholds as: 0.6*median, 1.33*median\n\nHowever, any of these thresholds is the best fit for my data. Manually, I've found that lowT=100, highT=150 are the best values. The data (gray-scale image) has the following properties:\nmedian=202.0, mean=206.6283375, standard deviation = 35.7482520742\nDoes anyvbody know where is the problem? or knows where can I found more information about this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2035,"Q_Id":15463191,"Users Score":1,"Answer":"Such image statistics as mean, std etc. are not sufficient to answer the question, and canny may not be the best approach; it all depends on characteristics of the image. To learn about those characteristics and approaches, you may google for a survey of image segmentation \/ edge detection methods. And this kind of problems often involve some pre-processing and post-processing steps.","Q_Score":2,"Tags":"python,opencv,image-processing,computer-vision","A_Id":18670288,"CreationDate":"2013-03-17T16:29:00.000","Title":"Choosing the threshold values for hysteresis","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to combine different classifiers into one in sklearn? I find sklearn.ensamble package. It contains different models, like AdaBoost and RandofForest, but they use decision trees under the hood and I want to use different methods, like SVM and Logistic regression. Is it possible with sklearn?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":694,"Q_Id":15471372,"Users Score":2,"Answer":"Do you just want to do majority voting? This is not implemented afaik. But as I said, you can just average the predict_proba scores. Or you can use LabelBinarizer of the predictions and average those. That would implement a voting scheme.\nEven if you are not interested in the probabilities, averaging the predicted probabilities might be more robust than doing a simple voting. This is hard to tell without trying out, though.","Q_Score":1,"Tags":"python,machine-learning,scikit-learn","A_Id":15743100,"CreationDate":"2013-03-18T07:06:00.000","Title":"Ensamble methods with scikit-learn","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large dataset. It's currently in the form of uncompressed numpy array files that were created with numpy.array.tofile(). 
Each file is approximately 100000 rows of 363 floats each. There are 192 files totalling 52 Gb.\nI'd like to separate a random fifth of this data into a test set, and a random fifth of that test set into a validation set. \nIn addition, I can only train on 1 Gb at a time (limitation of GPU's onboard memory) So I need to randomize the order of all the data so that I don't introduce a bias by training on the data in the order it was collected.\nMy main memory is 8 Gb in size. Can any recommend a method of randomizing and partitioning this huge dataset?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":756,"Q_Id":15512276,"Users Score":0,"Answer":"You could assign a unique sequential number to each row, then choose a random sample of those numbers, then serially extract each relevant row to a new file.","Q_Score":0,"Tags":"python,numpy,dataset","A_Id":15513160,"CreationDate":"2013-03-19T23:12:00.000","Title":"How should I divide a large (~50Gb) dataset into training, test, and validation sets?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In an python application I'm developing I have an array of 3D points (of size between 2 and 100000) and I have to find the points that are within a certain distance from each other (say between two values, like 0.1 and 0.2). I need this for a graphic application and this search should be very fast (~1\/10 of a second for a sample of 10000 points)\nAs a first experiment I tried to use the scipy.spatial.KDTree.query_pairs implementation, and with a sample of 5000 point it takes 5 second to return the indices. Do you know any approach that may work for this specific case?\nA bit more about the application:\nThe points represents atom coordinates and the distance search is useful to determine the bonds between atoms. Bonds are not necessarily fixed but may change at each step, such as in the case of hydrogen bonds.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":772,"Q_Id":15514641,"Users Score":1,"Answer":"The first thing that comes to my mind is:\nIf we calculate the distance between each two atoms in the set it will be O(N^2) operations. It is very slow.\nWhat about to introduce the statical orthogonal grid with some cells size (for example close to the distance you are interested) and then determine the atoms belonging to the each cell of the grid (it takes O(N) operations) After this procedure you can reduce the time for searching of the neighbors.","Q_Score":0,"Tags":"python,performance,algorithm,numpy,kdtree","A_Id":15514922,"CreationDate":"2013-03-20T03:15:00.000","Title":"Finding points in space closer than a certain value","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In an python application I'm developing I have an array of 3D points (of size between 2 and 100000) and I have to find the points that are within a certain distance from each other (say between two values, like 0.1 and 0.2). 
I need this for a graphic application and this search should be very fast (~1\/10 of a second for a sample of 10000 points)\nAs a first experiment I tried to use the scipy.spatial.KDTree.query_pairs implementation, and with a sample of 5000 point it takes 5 second to return the indices. Do you know any approach that may work for this specific case?\nA bit more about the application:\nThe points represents atom coordinates and the distance search is useful to determine the bonds between atoms. Bonds are not necessarily fixed but may change at each step, such as in the case of hydrogen bonds.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":772,"Q_Id":15514641,"Users Score":5,"Answer":"Great question! Here is my suggestion:\nDivide each coordinate by your \"epsilon\" value of 0.1\/0.2\/whatever and round the result to an integer. This creates a \"quotient space\" of points where distance no longer needs to be determined using the distance formula, but simply by comparing the integer coordinates of each point. If all coordinates are the same, then the original points were within approximately the square root of three times epsilon from each other (for example). This process is O(n) and should take 0.001 seconds or less.\n(Note: you would want to augment the original point with the three additional integers that result from this division and rounding, so that you don't lose the exact coordinates.)\nSort the points in numeric order using dictionary-style rules and considering the three integers in the coordinates as letters in words. This process is O(n * log(n)) and should take certainly less than your 1\/10th of a second requirement.\nNow you simply proceed through this sorted list and compare each point's integer coordinates with the previous and following points. If all coordinates match, then both of the matching points can be moved into your \"keep\" list of points, and all the others can be marked as \"throw away.\" This is an O(n) process which should take very little time.\nThe result will be a subset of all the original points, which contains only those points that could be possibly involved in any bond, with a bond being defined as approximately epsilon or less apart from some other point in your original set.\nThis process is not mathematically exact, but I think it is definitely fast and suited for your purpose.","Q_Score":0,"Tags":"python,performance,algorithm,numpy,kdtree","A_Id":15514859,"CreationDate":"2013-03-20T03:15:00.000","Title":"Finding points in space closer than a certain value","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have received several recommendations to use virtualenv to clean up my python modules. I am concerned because it seems too good to be true. Has anyone found downside related to performance or memory issues in working with multicore settings, starcluster, numpy, scikit-learn, pandas, or iPython notebook.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1022,"Q_Id":15540640,"Users Score":0,"Answer":"There's no performance overhead to using virtualenv. All it's doing is using different locations in the filesystem.\nThe only \"overhead\" is the time it takes to set it up. 
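Going back to the accepted neighbor-search answer above (divide coordinates by epsilon, round to integers, and compare integer cells), a minimal sketch of the bucketing step follows. The epsilon value and the NumPy-based grouping are assumptions, and the result is the approximate candidate set the answer describes, not exact pairwise distances.

```python
import numpy as np
from collections import defaultdict

def candidate_groups(points, eps):
    # points: (n, 3) float array of coordinates.
    # Points that share an integer cell after dividing by eps are at most
    # about sqrt(3) * eps apart, so they form the candidate "bond" groups.
    cells = defaultdict(list)
    for i, key in enumerate(map(tuple, np.floor(points / eps).astype(int))):
        cells[key].append(i)
    return [idx for idx in cells.values() if len(idx) > 1]
```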
You'd need to install each package in your virtualenv (numpy, pandas, etc.)","Q_Score":5,"Tags":"python-2.7,virtualenv,scientific-computing","A_Id":15540786,"CreationDate":"2013-03-21T06:10:00.000","Title":"Are there any downsides to using virtualenv for scientific python and machine learning?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have received several recommendations to use virtualenv to clean up my python modules. I am concerned because it seems too good to be true. Has anyone found downside related to performance or memory issues in working with multicore settings, starcluster, numpy, scikit-learn, pandas, or iPython notebook.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1022,"Q_Id":15540640,"Users Score":3,"Answer":"Virtualenv is the best and easiest way to keep some sort of order when it comes to dependencies. Python is really behind Ruby (bundler!) when it comes to dealing with installing and keeping track of modules. The best tool you have is virtualenv.\nSo I suggest you create a virtualenv directory for each of your applications, put together a file where you list all the 'pip install' commands you need to build the environment and ensure that you have a clean repeatable process for creating this environment.\nI think that the nature of the application makes little difference. There should not be any performance issue since all that virtualenv does is to load libraries from a specific path rather than load them from the directory where they are saved by default.\nIn any case (this may be completely irrelevant), but if performance is an issue, then perhaps you ought to be looking at a compiled language. Most likely though, any performance bottlenecks could be improved with better coding.","Q_Score":5,"Tags":"python-2.7,virtualenv,scientific-computing","A_Id":15540795,"CreationDate":"2013-03-21T06:10:00.000","Title":"Are there any downsides to using virtualenv for scientific python and machine learning?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using a python program to produce some data, plotting the data using matplotlib.pyplot and then displaying the figure in a latex file. \nI am currently saving the figure as a .png file but the image quality isn't great. I've tried changing the DPI in matplotlib.pyplot.figure(dpi=200) etc but this seems to make little difference. I've also tried using differnet image formats but they all look a little faded and not very sharp.\nHas anyone else had this problem? \nAny help would be much appreciated","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":18876,"Q_Id":15575466,"Users Score":24,"Answer":"You can save the images in a vector format so that they will be scalable without quality loss. Such formats are PDF and EPS. Just change the extension to .pdf or .eps and matplotlib will write the correct image format. Remember LaTeX likes EPS and PDFLaTeX likes PDF images. 
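A short sketch of the figure-export advice just given (vector formats selected by the file extension, or a much higher DPI when a bitmap is unavoidable); the figure content and sizes are placeholders.

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(4, 3))   # size in inches, so fonts stay in proportion
ax.plot([0, 1, 2], [0, 1, 4])

fig.savefig("figure.pdf")                # vector output, suited to pdflatex
fig.savefig("figure.eps")                # vector output, suited to classic latex
fig.savefig("figure.png", dpi=600)       # if a bitmap is required, raise the DPI
```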
Although most modern LaTeX executables are PDFLaTeX in disguise and convert EPS files on the fly (same effect as if you included the epstopdf package in your preamble, which may not perform as well as you'd like).\nAlternatively, increase the DPI, a lot. These are the numbers you should keep in mind:\n\n300dpi: plain paper prints\n600dpi: professional paper prints. Most commercial office printers reach this in their output.\n1200dpi: professional poster\/brochure grade quality.\n\nI use these to adapt the quality of PNG figures in conjunction with figure's figsize option, which allows for correctly scaled text and graphics as you improve the quality through dpi.","Q_Score":18,"Tags":"python,graph,matplotlib","A_Id":15578952,"CreationDate":"2013-03-22T16:35:00.000","Title":"How do you improve matplotlib image quality?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a huge csv file which contains millions of records and I want to load it into Netezza DB using python script I have tried simple insert query but it is very very slow. \nCan point me some example python script or some idea how can I do the same?\nThank you","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4583,"Q_Id":15592980,"Users Score":0,"Answer":"You need to get the nzcli installed on the machine that you want to run nzload from - your sysadmin should be able to put it on your unix\/linux application server. There's a detailed process to setting it all up, caching the passwords, etc - the sysadmin should be able to do that to.\nOnce it is set up, you can create NZ control files to point to your data files and execute a load. The Netezza Data Loading guide has detailed instructions on how to do all of this (it can be obtained through IBM).\nYou can do it through aginity as well if you have the CREATE EXTERNAL TABLE privledge - you can do a INSERT INTO FROM EXTERNAL ... REMOTESOURCE ODBC to load the file from an ODBC connection.","Q_Score":2,"Tags":"python,netezza","A_Id":15643468,"CreationDate":"2013-03-23T22:45:00.000","Title":"How to use NZ Loader (Netezza Loader) through Python Script?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a huge csv file which contains millions of records and I want to load it into Netezza DB using python script I have tried simple insert query but it is very very slow. 
\nCan someone point me to an example Python script, or give me an idea of how I can do the same?\nThank you","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":4583,"Q_Id":15592980,"Users Score":1,"Answer":"You can use nz_load4 to load the data. It is a support utility located in \/nz\/support\/contrib\/bin.\nThe syntax is the same as nzload. By default nz_load4 loads the data using 4 threads, and you can go up to 32 threads with the -tread option.\nFor more details run nz_load4 -h.\nIt will create log files based on the number of threads used.","Q_Score":2,"Tags":"python,netezza","A_Id":17522337,"CreationDate":"2013-03-23T22:45:00.000","Title":"How to use NZ Loader (Netezza Loader) through Python Script?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying desperately to make a fullscreen plot in matplotlib on Ubuntu 12.10. I have tried everything I can find on the web. I need my plot to go completely fullscreen, not just maximized. Has anyone ever gotten this to work? If so, could you please share how?\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1451,"Q_Id":15619825,"Users Score":1,"Answer":"SOLVED - My problem was that I was not up to the latest version of Matplotlib. I did the following steps to get fullscreen working in Matplotlib with Ubuntu 12.10.\n\nUninstalled matplotlib with sudo apt-get remove python-matplotlib\nInstalled build dependencies for matplotlib sudo apt-get build-dep python-matplotlib\nInstalled matplotlib 1.2 with pip sudo pip install matplotlib\nSet matplotlib to use the GTK backend with matplotlib.rcParams['backend'] = 'GTK'\nUsed keyboard shortcut 'f' when the plot was onscreen and it worked!","Q_Score":1,"Tags":"python,numpy,matplotlib,scipy","A_Id":15623721,"CreationDate":"2013-03-25T16:24:00.000","Title":"Matplotlib fullscreen not working","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two vectors x and y, and I want to compute a rolling regression for those, e.g. on (x(1:4),y(1:4)), (x(2:5),y(2:5)), ...\nIs there already a function for that? The best algorithm I have in mind for this is O(n), but applying separate linear regressions on every subarray would be O(n^2).\nI'm working with Matlab and Python (numpy).","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4368,"Q_Id":15636796,"Users Score":2,"Answer":"No, there is NO function that will do a rolling regression, returning all the statistics you wish, doing it efficiently.\nThat does not mean you can't write such a function. To do so would mean multiple calls to a tool like conv or filter. This is how a Savitsky-Golay tool would work, which DOES do most of what you want. Make one call for each regression coefficient.\nUse of up-dating and down-dating tools to use\/modify the previous regression estimates will not be as efficient as the calls to conv, since you only need to factorize a linear system ONCE when you then do the work with conv. Anyway, there is no need to do an update, as long as the points are uniformly spaced in the series.
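The convolution idea in this rolling-regression answer (one filter pass per regression coefficient, valid when the x values are uniformly spaced) can be sketched in NumPy as follows. Only the slope coefficient is shown, and the window length is an assumption.

```python
import numpy as np

def rolling_slope(y, window):
    # OLS slope of y against equally spaced x over each sliding window.
    # With uniform spacing the slope is a fixed linear filter of y,
    # so a single convolution replaces the per-window regressions.
    x = np.arange(window)
    weights = (x - x.mean()) / np.sum((x - x.mean()) ** 2)
    return np.convolve(y, weights[::-1], mode="valid")

# rolling_slope(np.array([1.0, 2.0, 4.0, 7.0, 11.0]), 3) -> array([1.5, 2.5, 3.5])
```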
This is why Savitsky-Golay works.","Q_Score":2,"Tags":"python,matlab,numpy,linear-regression,rolling-computation","A_Id":15638779,"CreationDate":"2013-03-26T12:11:00.000","Title":"Efficient way to do a rolling linear regression","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a lot of data stored at disk in large arrays. I cant load everything in memory altogether.\nHow one could calculate the mean and the standard deviation?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":6151,"Q_Id":15638612,"Users Score":6,"Answer":"Sounds like a math question. For the mean, you know that you can take the mean of a chunk of data, and then take the mean of the means. If the chunks aren't the same size, you'll have to take a weighted average. \nFor the standard deviation, you'll have to calculate the variance first. I'd suggest doing this alongside the calculation of the mean. For variance, you have\nVar(X) = Avg(X^2) - Avg(X)^2\nSo compute the average of your data, and the average of your (data^2). Aggregate them as above, and the take the difference.\nThen the standard deviation is just the square root of the variance.\nNote that you could do the whole thing with iterators, which is probably the most efficient.","Q_Score":4,"Tags":"python,statistics","A_Id":15638712,"CreationDate":"2013-03-26T13:44:00.000","Title":"calculating mean and standard deviation of the data which does not fit in memory using python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I did a code which generates random numbers below and i save them in a csv which look like below, I am trying to play around and learn the group by function. I would like for instance do the sum or average of those group by timestamp. I am new in Python, i cannot find anywhere to start though. Ulitmately i would like to do the same but for 1min or 5min (every 5min starting from 00:00:00, not enough data in my example below but that would do something like 13:35:00 to 13:40:00 and the next one 13:40:00 included to 13:45:00 excluded, etc), i think i could figure out the 1min in extracting the minute part from the timestamp but the 5min seems complex. Not asking for a copy paste of a code, but i have no idea where to start to be honest.\n\nLevel Timestamp\n99 03\/04\/2013 13:37:20\n98 03\/04\/2013 13:37:20\n98 03\/04\/2013 13:37:20\n99 03\/04\/2013 13:37:20\n105 03\/04\/2013 13:37:20\n104 03\/04\/2013 13:37:20\n102 03\/04\/2013 13:37:21\n102 03\/04\/2013 13:37:21\n103 03\/04\/2013 13:37:22\n82 03\/04\/2013 13:37:23\n83 03\/04\/2013 13:37:23\n82 03\/04\/2013 13:37:23\n83 03\/04\/2013 13:37:23\n54 03\/04\/2013 13:37:24\n55 03\/04\/2013 13:37:24\n54 03\/04\/2013 13:37:24\n55 03\/04\/2013 13:37:24\n56 03\/04\/2013 13:37:25\n57 03\/04\/2013 13:37:25","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":584,"Q_Id":15790467,"Users Score":0,"Answer":"There are several ways to approach this, but you're effectively \"binning\" on the times. I would approach it in a few steps:\nYou don't want to parse the time yourself with string manipulation, it will blow up in your face; trust me! 
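Stepping back to the out-of-memory mean and standard deviation answer above (accumulate counts, sums and sums of squares over chunks, then use Var(X) = E[X^2] - E[X]^2), a minimal sketch might look like this. As an aside not in the original answer, Welford-style running updates are numerically safer for data with a large mean.

```python
import numpy as np

def streaming_mean_std(chunks):
    # chunks: iterable of array-likes that individually fit in memory.
    n, total, total_sq = 0, 0.0, 0.0
    for chunk in chunks:
        data = np.asarray(chunk, dtype=np.float64)
        n += data.size
        total += data.sum()
        total_sq += np.square(data).sum()
    mean = total / n
    variance = total_sq / n - mean ** 2
    return mean, np.sqrt(variance)
```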
Parse out the timestamp into a datetime object (google should give you a pretty good answer). Once you have that, you can do lots of fun stuff like compare two times.\nNow that you have the datetime objects, you can start to \"bin\" them. I'm going to assume the records are in order. Start with the first record's time of \"03\/04\/2013 13:37:20\" and create a new datetime object at \"03\/04\/2013 13:37:00\" [hint: set seconds=0 on the datetime object you read in]. This is the start of your first \"bin\". Now add one minute to your start datetime [hint: endDT = startDT + timedelta(seconds=60)], that's the end of your first bin.\nNow start going through your records checking if the record is less than your endDT, if it is, add it to a list for that bin. If the record is greater than your endDT, you're in the next bin. To start the new bin, add one minute to your endDT and create a new list to hold those items and keep chugging along in your loop.\nOnce you go through the loop, you can run max\/min\/avg on the lists. Ideally, you'll store the lists in a dictionary that looks like {datetimeObject : [34, 23, 45, 23]}. It'll make printing and sorting easy.\nThis isn't the most efficient\/flexible\/cool way to do it, but I think it's probably the most helpful to start with.","Q_Score":2,"Tags":"python,group-by,timestamp,time-series","A_Id":15791843,"CreationDate":"2013-04-03T14:41:00.000","Title":"Grouping data by frequency","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I manage to install mapnik for python and am able to render maps using a provided shp file. Is there a possibility to retrieve dynamically the shape file for the map I want to render (given coordinates) from python? or do I need to download the whole OSM files, and import them into my own database? \nthanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":286,"Q_Id":15792073,"Users Score":0,"Answer":"Dynamic retrieval of data from shapefiles is not suggested for large applications. The best practice is to dump the shapefile in databases like postgres (shp2pgsql) & generated the map using mapnik & tile them using tilecache.","Q_Score":0,"Tags":"python,openstreetmap,shapefile,mapnik","A_Id":16271420,"CreationDate":"2013-04-03T15:52:00.000","Title":"openstreet maps: dynamically retrieve shp files in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Sorry if this has been answered somewhere already, I couldn't find the answer.\nI have installed python 2.7.3 onto a windows 7 computer. I then downloaded the pandas-0.10.1.win-amd64-py2.7.exe and tried to install it. I have gotten past the first window, but then it states \"Python 2.7 is required, which was not found in the registry\".\nI then get the option to put the path in to find python, but I cannot get it to work.\nHow would I fix this? 
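Returning to the timestamp-binning recipe above (parse each stamp into a datetime, truncate it to the start of its bin, collect values per bin, then aggregate), here is a compact sketch. The day-first format string and the choice of an average per bin are assumptions based on the question's sample data.

```python
from collections import defaultdict
from datetime import datetime

def average_per_bin(rows, minutes=1):
    # rows: iterable of (level, "dd/mm/YYYY HH:MM:SS") pairs.
    bins = defaultdict(list)
    for level, stamp in rows:
        dt = datetime.strptime(stamp, "%d/%m/%Y %H:%M:%S")
        # Truncate to the start of the 1- or 5-minute bin.
        start = dt.replace(minute=dt.minute - dt.minute % minutes, second=0)
        bins[start].append(level)
    return {start: sum(vals) / len(vals) for start, vals in sorted(bins.items())}
```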
Sorry for the silly question.\nThanks.\n~Kututo","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3609,"Q_Id":15832445,"Users Score":0,"Answer":"After you have installed python check to see if the appropriate path variables are set by typing the following at the command line:\necho %PATH%\nif you do not see something like:\nC:\\Python27;C:\\Python27\\Scripts \non the output (probably with lots of other paths) then type this:\nset PATH=%PATH%;C:\\\\Python27\\\\;C:\\\\Python27\\Scripts\nThen try installing the 32-bit pandas executable.","Q_Score":4,"Tags":"python,windows,installation,pandas","A_Id":15847067,"CreationDate":"2013-04-05T11:12:00.000","Title":"Cannot seem to install pandas for python 2.7 on windows","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Sorry if this has been answered somewhere already, I couldn't find the answer.\nI have installed python 2.7.3 onto a windows 7 computer. I then downloaded the pandas-0.10.1.win-amd64-py2.7.exe and tried to install it. I have gotten past the first window, but then it states \"Python 2.7 is required, which was not found in the registry\".\nI then get the option to put the path in to find python, but I cannot get it to work.\nHow would I fix this? Sorry for the silly question.\nThanks.\n~Kututo","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":3609,"Q_Id":15832445,"Users Score":2,"Answer":"I faced the same issue. Here is what worked\n\nChanged to PATH to include C:\\Python27;C:\\Python27\\Lib\\site-packages\\;C:\\Python27\\Scripts\\;\nuninstall 64bit numpy and pandas\ninstall 32win 2.7 numpy and pandas \nI had to also install dateutil and pytz\npandas and numpy work import work fine","Q_Score":4,"Tags":"python,windows,installation,pandas","A_Id":25210272,"CreationDate":"2013-04-05T11:12:00.000","Title":"Cannot seem to install pandas for python 2.7 on windows","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"pyplot.hist() documentation specifies that when setting a range for a histogram \"lower and upper outliers are ignored\".\nIs it possible to make the first and last bins of a histogram include all outliers without changing the width of the bin?\nFor example, let's say I want to look at the range 0-3 with 3 bins: 0-1, 1-2, 2-3 (let's ignore cases of exact equality for simplicity). I would like the first bin to include all values from minus infinity to 1, and the last bin to include all values from 2 to infinity. However, if I explicitly set these bins to span that range, they will be very wide. I would like them to have the same width. The behavior I am looking for is like the behavior of hist() in Matlab.\nObviously I can numpy.clip() the data and plot that, which will give me what I want. But I am interested if there is a builtin solution for this.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":7794,"Q_Id":15837810,"Users Score":8,"Answer":"No. 
Looking at matplotlib.axes.Axes.hist and the direct use of numpy.histogram I'm fairly confident in saying that there is no smarter solution than using clip (other than extending the bins that you histogram with).\nI'd encourage you to look at the source of matplotlib.axes.Axes.hist (it's just Python code, though admittedly hist is slightly more complex than most of the Axes methods) - it is the best way to verify this kind of question.","Q_Score":19,"Tags":"python,numpy,matplotlib","A_Id":16186805,"CreationDate":"2013-04-05T15:29:00.000","Title":"Making pyplot.hist() first and last bins include outliers","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I have big dataset which has both strings and numerical values\nex.\nUser name (str) , handset(str), number of requests(int), number of downloads(int) ,.......\nI have around 200 such columns. \nIs there a way\/algorithm which can handle both strings and integers during feature selection ?\nOr how should I approach this issue.\nthanks","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1683,"Q_Id":15868108,"Users Score":0,"Answer":"Feature selection algorithms assigns weights to different features based on their impact in the classification. In my best knowledge the features types does not make difference when computing different weights. I suggest to convert string features to numerical based on their ASCII codes or any other techniques. Then you can use the existing feature selection algorithm in rapid miner.","Q_Score":3,"Tags":"python,machine-learning,weka,rapidminer,feature-selection","A_Id":15887123,"CreationDate":"2013-04-07T21:34:00.000","Title":"Feature Selection in dataset containing both string and numerical values?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I have big dataset which has both strings and numerical values\nex.\nUser name (str) , handset(str), number of requests(int), number of downloads(int) ,.......\nI have around 200 such columns. 
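For the histogram-outlier question above, the clipping approach that the accepted answer effectively endorses looks like this. The sample data are made up, and the 0-3 range with three bins comes from the question's own example.

```python
import numpy as np
import matplotlib.pyplot as plt

data = 1.5 + 2.0 * np.random.randn(10_000)   # assumed sample containing outliers
edges = np.linspace(0, 3, 4)                 # bins 0-1, 1-2, 2-3

# Everything below 0 is folded into the first bin and everything above 3
# into the last one, without widening any bin.
plt.hist(np.clip(data, edges[0], edges[-1]), bins=edges)
plt.show()
```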
\nIs there a way\/algorithm which can handle both strings and integers during feature selection ?\nOr how should I approach this issue.\nthanks","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1683,"Q_Id":15868108,"Users Score":0,"Answer":"I've used Weka Feature Selection and although the attribute evaluator methods I've tried can't handle string attributes you can temporary remove them in the Preprocess > Filter > Unsupervised > Attribute > RemoveType, then perform the feature selection and, later, include strings again to do the classification.","Q_Score":3,"Tags":"python,machine-learning,weka,rapidminer,feature-selection","A_Id":17920216,"CreationDate":"2013-04-07T21:34:00.000","Title":"Feature Selection in dataset containing both string and numerical values?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I have big dataset which has both strings and numerical values\nex.\nUser name (str) , handset(str), number of requests(int), number of downloads(int) ,.......\nI have around 200 such columns. \nIs there a way\/algorithm which can handle both strings and integers during feature selection ?\nOr how should I approach this issue.\nthanks","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1683,"Q_Id":15868108,"Users Score":0,"Answer":"There are a set of operators you could use in the Attribute Weighting group within RapidMiner. For example, Weight By Correlation or Weight By Information Gain.\nThese will assess how much weight to give an attribute based on its relevance to the label (in this case the download flag). The resulting weights can then be used with the Select by Weights operator to eliminate those that are not needed. This approach considers attributes by themselves.\nYou could also build a classification model and use the forward selection operators to add more and more attributes and monitor performance. This approach will consider the relationships between attributes.","Q_Score":3,"Tags":"python,machine-learning,weka,rapidminer,feature-selection","A_Id":16003658,"CreationDate":"2013-04-07T21:34:00.000","Title":"Feature Selection in dataset containing both string and numerical values?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've trained a Random Forest (regressor in this case) model using scikit learn (python), and I'would like to plot the error rate on a validation set based on the numeber of estimators used. In other words, there's a way to predict using only a portion of the estimators in your RandomForestRegressor?\nUsing predict(X) will give you the predictions based on the mean of every single tree results. There is a way to limit the usage of the trees? 
Or, alternatively, can I get the individual output of each tree in the forest?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1415,"Q_Id":15869919,"Users Score":1,"Answer":"Once trained, you can access these via the \"estimators_\" attribute of the random forest object.","Q_Score":0,"Tags":"python,limit,scikit-learn,prediction,random-forest","A_Id":15892422,"CreationDate":"2013-04-08T01:28:00.000","Title":"Random Forest - Predict using less estimators","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the difference between ndarray and array in Numpy? And where can I find the implementations in the numpy source code?","AnswerCount":5,"Available Count":2,"Score":-0.0798297691,"is_accepted":false,"ViewCount":148479,"Q_Id":15879315,"Users Score":-2,"Answer":"I think np.array() only creates C-ordered arrays: even if you specify the order, np.isfortran() reports False. With np.ndarray(), however, the array is created with whatever order you specify.","Q_Score":340,"Tags":"python,arrays,numpy,multidimensional-array,numpy-ndarray","A_Id":52104659,"CreationDate":"2013-04-08T12:41:00.000","Title":"What is the difference between ndarray and array in numpy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the difference between ndarray and array in Numpy? And where can I find the implementations in the numpy source code?","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":148479,"Q_Id":15879315,"Users Score":66,"Answer":"numpy.array is a function that returns a numpy.ndarray. There is no object type numpy.array.","Q_Score":340,"Tags":"python,arrays,numpy,multidimensional-array,numpy-ndarray","A_Id":15879428,"CreationDate":"2013-04-08T12:41:00.000","Title":"What is the difference between ndarray and array in numpy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using pyplot.bar but I'm plotting so many points that the color of the bars is always black. This is because the borders of the bars are black and there are so many of them that they are all squished together so that all you see is the borders (black).
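Picking up the estimators_ attribute from the random-forest answer above, here is a sketch of predicting with only the first k trees and tracking the error as k grows; the toy regression data and the squared-error metric are assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def predict_with_first_k(forest, X, k):
    # Average only the first k fitted trees from estimators_.
    return np.mean([tree.predict(X) for tree in forest.estimators_[:k]], axis=0)

errors = {k: np.mean((predict_with_first_k(forest, X, k) - y) ** 2)
          for k in (1, 10, 50, 100)}
```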
Is there a way to remove the bar borders so that I can see the intended color?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":72444,"Q_Id":15904042,"Users Score":135,"Answer":"Set the edgecolor to \"none\": bar(..., edgecolor = \"none\")","Q_Score":75,"Tags":"python,graph,matplotlib,border","A_Id":15904277,"CreationDate":"2013-04-09T14:02:00.000","Title":"matplotlib bar graph black - how do I remove bar borders","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"how does ImageFilter in PIL normalize the pixel values(not the kernel) between 0 and 255 after filtering with Kernel or mask?(Specially zero-summing kernel like:( -1,-1,-1,0,0,0,1,1,1 ))\n\nmy code was like:\nimport Image\nimport ImageFilter\nHoriz = ImageFilter.Kernel((3, 3), (-1,-2,-1,0,0,0,1,2,1), scale=None, offset=0) #sobel mask\nim_fltd = myimage.filter(Horiz)","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1691,"Q_Id":15906368,"Users Score":1,"Answer":"The above answer of Mark states his theory regarding what happens when a Zero-summing kernel is used with scale argument 0 or None or not passed\/mentioned. Now talking about how PIL handles calculated pixel values after applying kernel,scale and offset, which are not in [0,255] range. My theory about how it normalizes calculated pixel value is that it simply do: Any resulting value <= 0 becomes 0 and anything > 255 becomes 255.","Q_Score":3,"Tags":"image,image-processing,python-2.7,python-imaging-library","A_Id":15957090,"CreationDate":"2013-04-09T15:40:00.000","Title":"how does ImageFilter in PIL normalize the pixel values between 0 and 255 after filtering with Kernel or mask","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently working with hundreds of files, all of which I want to read in and view as a numpy array. Right now I am using os.walk to pull all the files from a directory. I have a for loop that goes through the directory and will then create the array, but it is not stored anywhere. Is there a way to create arrays \"on the go\" or to somehow allocate a certain amount of memory for empty arrays?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":15915255,"Users Score":0,"Answer":"Python's lists are dynamic, you can change their length on-the-fly, so just store them in a list. Or if you wanted to reference them by name instead of number, use a dictionary, whose size can also change on the fly.","Q_Score":0,"Tags":"python,arrays,os.walk","A_Id":15915785,"CreationDate":"2013-04-10T01:08:00.000","Title":"Creating many arrays at once","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"There are many functions in OpenCV 2.4 not available using Python. 
\nPlease advice me how to convert the C++ functions so that I can use in Python 2.7.\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":15924060,"Users Score":0,"Answer":"You should have a look at Python Boost. This might help you to bind C++ functions you need to python.","Q_Score":1,"Tags":"python,opencv","A_Id":15925696,"CreationDate":"2013-04-10T11:03:00.000","Title":"using python for opencv","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a text classification problem using scikit-learn classifiers and text feature extractor, particularly TfidfVectorizer class.\nThe problem is that I have two kinds of features, the first are captured by the n-grams obtained from TfidfVectorizer and the other are domain specific features that I extract from each document. I need to combine both features in a single feature vector for each document; to do this I need to update the scipy sparse matrix returned by TfidfVectorizer by adding a new dimension in each row holding the domain feature for this document. However, I can't find a neat way to do this, by neat I mean not converting the sparse matrix into a dense one since simply it won't fit in memory.\nProbably I am missing a feature in scikit-learn or something, since I am new to both scipy and scikit-learn.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":692,"Q_Id":15938025,"Users Score":5,"Answer":"I think the easiest would be to create a new sparse matrix with your custom features and then use scipy.sparse.hstack to stack the features.\nYou might also find the \"FeatureUnion\" from the pipeline module helpful.","Q_Score":5,"Tags":"python-2.7,scipy,sparse-matrix,scikit-learn","A_Id":15949294,"CreationDate":"2013-04-10T23:02:00.000","Title":"How to Extend Scipy Sparse Matrix returned by sklearn TfIdfVectorizer to hold more features","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I get the number of rows of a pandas dataframe df?","AnswerCount":15,"Available Count":1,"Score":0.0532828229,"is_accepted":false,"ViewCount":3270072,"Q_Id":15943769,"Users Score":4,"Answer":"Either of this can do it (df is the name of the DataFrame):\nMethod 1: Using the len function:\nlen(df) will give the number of rows in a DataFrame named df.\nMethod 2: using count function:\ndf[col].count() will count the number of rows in a given column col.\ndf.count() will give the number of rows for all the columns.","Q_Score":1569,"Tags":"python,pandas,dataframe","A_Id":61413025,"CreationDate":"2013-04-11T08:14:00.000","Title":"How do I get the row count of a Pandas DataFrame?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running Python 2.7.2 on my machine. I am trying to install numpy with easy_install and pip, but none of them are able to do so. 
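For the TF-IDF feature-combination answer above (stack the extra per-document features next to the n-gram matrix with scipy.sparse.hstack), a small sketch follows; the toy documents and the single extra feature are made up for illustration.

```python
import numpy as np
from scipy import sparse
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["first toy document", "second toy document here"]
tfidf = TfidfVectorizer().fit_transform(docs)          # sparse (n_docs, n_terms)

extra = sparse.csr_matrix(np.array([[3.0], [7.0]]))    # one domain feature per doc
combined = sparse.hstack([tfidf, extra]).tocsr()       # still sparse, one extra column
```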
So, when I try:\nsudo easy_install-2.7 numpy\nI get this error:\n\"The package setup script has attempted to modify files on your system\nthat are not within the EasyInstall build area, and has been aborted.\nThis package cannot be safely installed by EasyInstall, and may not\nsupport alternate installation locations even if you run its setup\nscript by hand. Please inform the package's author and the EasyInstall\nmaintainers to find out if a fix or workaround is available.\"\nMoreover, when I try with pip:\nsudo pip-2.7 install numpy\nI get this error:\nRuntimeError: Broken toolchain: cannot link a simple C program\nIs there any fix available for this?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":5561,"Q_Id":15957071,"Users Score":3,"Answer":"I was facing the same error while installing the requirements for my django project. This worked for me.\nUpgrade your setuptools version via pip install --upgrade setuptools and run the command for installing the packages again.","Q_Score":3,"Tags":"python,numpy,pip,easy-install","A_Id":59262156,"CreationDate":"2013-04-11T19:21:00.000","Title":"easy_install and pip giving errors when trying to install numpy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a relatively large text-based web classification problem and I am planning on using the multinomial Naive Bayes classifier in sklearn in python and the scrapy framework for the crawling. However, I am a little concerned that sklearn\/python might be too slow for a problem that could involve classifications of millions of websites. I have already trained the classifier on several thousand websites from DMOZ. \nThe research framework is as follows:\n1) The crawler lands on a domain name and scrapes the text from 20 links on the site (of depth no larger than one). (The number of tokenized words here seems to vary between a few thousand to up to 150K for a sample run of the crawler)\n2) Run the sklearn multionmial NB classifier with around 50,000 features and record the domain name depending on the result\nMy question is whether a Python-based classifier would be up to the task for such a large scale application or should I try re-writing the classifier (and maybe the scraper and word tokenizer as well) in a faster environment? If yes what might that environment be?\nOr perhaps Python is enough if accompanied with some parallelization of the code?\nThanks","AnswerCount":2,"Available Count":1,"Score":0.4621171573,"is_accepted":false,"ViewCount":940,"Q_Id":15989610,"Users Score":5,"Answer":"Use the HashingVectorizer and one of the linear classification modules that supports the partial_fit API for instance SGDClassifier, Perceptron or PassiveAggresiveClassifier to incrementally learn the model without having to vectorize and load all the data in memory upfront and you should not have any issue in learning a classifier on hundreds of millions of documents with hundreds of thousands (hashed) features.\nYou should however load a small subsample that fits in memory (e.g. 100k documents) and grid search good parameters for the vectorizer using a Pipeline object and the RandomizedSearchCV class of the master branch. You can also fine tune the value of the regularization parameter (e.g. 
C for PassiveAggressiveClassifier or alpha for SGDClassifier) using the same RandomizedSearchCVor a larger, pre-vectorized dataset that fits in memory (e.g. a couple of millions of documents).\nAlso linear models can be averaged (average the coef_ and intercept_ of 2 linear models) so that you can partition the dataset, learn linear models independently and then average the models to get the final model.","Q_Score":5,"Tags":"python,scrapy,classification,scikit-learn","A_Id":15998577,"CreationDate":"2013-04-13T15:44:00.000","Title":"Using sklearn and Python for a large application classification\/scraping exercise","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"In Python, are there any advantages \/ disadvantages of working with a list of lists versus working with a dictionary, more specifically when doing numerical operations with them? I'm writing a class of functions to solve simple matrix operations for my linear algebra class. I was using dictionaries, but then I saw that numpy uses list of lists instead, so I guess there must be some advantages in it.\nExample: [[1,2,3],[4,5,6],[7,8,9]] as opposed to {0:[1,2,3],1:[4,5,6],2:[7,8,9]}","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":20631,"Q_Id":15990456,"Users Score":12,"Answer":"When the keys of the dictionary are 0, 1, ..., n, a list will be faster, since no hashing is involved. As soon as the keys are not such a sequence, you need to use a dict.","Q_Score":14,"Tags":"python,arrays,list,math,dictionary","A_Id":15990493,"CreationDate":"2013-04-13T17:00:00.000","Title":"List of lists vs dictionary","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to generate a radial basis function where the input variables are defined at runtime. The SciPy.interpolate.Rbf function seems to request discrete lists for each input and output variable, eg: rbf(x,y,z). This restricts you to defining fixed variables before hand. \nI have tried unsuccessfully to pass a list or array of variables to the Rbf function (eg. rbf(list(x,...)) with no success. Has anyone else found a solution to this problem using this Rbf library? I would like to avoid switching to a different library or rewriting one if possible. Is there a way to generate discrete variables at runtime to feed into a function?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":512,"Q_Id":16063698,"Users Score":0,"Answer":"After looking through the source for the SciPy function, I will just subclass it and override init where the individual inputs are combined into an array anyway.","Q_Score":0,"Tags":"python,scipy","A_Id":16064603,"CreationDate":"2013-04-17T15:07:00.000","Title":"Alternative inputs to SciPy Radial Basis Functions","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have a pandas DataFrame with two columns named 'A' and 'B'.\nNow suppose I also have a dictionary with keys 'A' and 'B', and the dictionary points to a scalar. 
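Returning to the out-of-core text-classification answer above (HashingVectorizer plus a linear model trained with partial_fit), a minimal sketch is shown below. The batch source, the number of hashed features, and the SGDClassifier loss are assumptions; older scikit-learn releases spell the loss "log" rather than "log_loss".

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2 ** 18)   # stateless: nothing to fit
clf = SGDClassifier(loss="log_loss")                 # "log" on older versions

def train_incrementally(batches, classes):
    # batches: iterable of (texts, labels) chunks that each fit in memory.
    for texts, labels in batches:
        clf.partial_fit(vectorizer.transform(texts), labels, classes=classes)
    return clf
```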
That is, dict['A'] = 1.2 and similarly for 'B'.\nIs there a simple way to multiply each column of the DataFrame by these scalars?\nCheers!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":9702,"Q_Id":16112209,"Users Score":5,"Answer":"As Wouter said, the recommended method is to convert the dict to a pandas.Series and multiple the two objects together:\nresult = df * pd.Series(myDict)","Q_Score":4,"Tags":"python,pandas","A_Id":16225932,"CreationDate":"2013-04-19T19:32:00.000","Title":"Multiplying Columns by Scalars in Pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am plotting some data using pylab and everything works perfect as I expect. I have 6 different graphs to plot and I can individually plot them in separate figures. But when I try to subplot() these graphs, the last one (subplot(3,2,6)) doesn't show anything.\nWhat confuses me is that this 6th graph is drawn perfectly when put in a separate figure but not in the subplot - with identical configurations.\nAny ideas what may be causing the problem ?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":375,"Q_Id":16133206,"Users Score":1,"Answer":"I found out that subplot() should be called before the plot(), issue resolved.","Q_Score":1,"Tags":"python,matplotlib,plot","A_Id":16133372,"CreationDate":"2013-04-21T16:07:00.000","Title":"Python Pylab subplot bug?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to save multiple numpy arrays along with the user input that was used to compute the data these arrays contain in a single file. I'm having a hard time finding a good procedure to use to achieve this or even what file type to use. The only thing i can think of is too put the computed arrays along with the user input into one single array and then save it using numpy.save. Does anybody know any better alternatives or good file types for my use?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":106,"Q_Id":16149187,"Users Score":0,"Answer":"I had this problem long ago so i dont have the code near to show you, but i used a binary write in a tmp file to get that done.\nEDIT: Thats is, pickle is what i used. Thanks SpankMe and RoboInventor","Q_Score":0,"Tags":"python,file,numpy","A_Id":16149290,"CreationDate":"2013-04-22T14:06:00.000","Title":"Saving python data for an application","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to save multiple numpy arrays along with the user input that was used to compute the data these arrays contain in a single file. I'm having a hard time finding a good procedure to use to achieve this or even what file type to use. The only thing i can think of is too put the computed arrays along with the user input into one single array and then save it using numpy.save. 
Does anybody know any better alternatives or good file types for my use?","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":106,"Q_Id":16149187,"Users Score":2,"Answer":"How about using pickle and then storing pickled array objects in a storage of your choice, like database or files?","Q_Score":0,"Tags":"python,file,numpy","A_Id":16149283,"CreationDate":"2013-04-22T14:06:00.000","Title":"Saving python data for an application","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a numerical matrix of 2500*2500. To calculate the MIC (maximal information coefficient) for each pair of vectors, I am using minepy.MINE, but this is taking forever, can I make it faster?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":425,"Q_Id":16171519,"Users Score":1,"Answer":"first, use the latest version of minepy. Second, you can use a smaller value of \"alpha\" parameter, say 0.5 or 0.45. In this way, you will reduce the computational time in despite of characteristic matrix accuracy.\nDavide","Q_Score":0,"Tags":"python,python-2.7","A_Id":16401807,"CreationDate":"2013-04-23T14:07:00.000","Title":"how to make minepy.MINE run faster?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am drawing a histogram using matplotlib in python, and would like to draw a line representing the average of the dataset, overlaid on the histogram as a dotted line (or maybe some other color would do too). Any ideas on how to draw a line overlaid on the histogram?\nI am using the plot() command, but not sure how to draw a vertical line (i.e. what value should I give for the y-axis?\nthanks!","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":127120,"Q_Id":16180946,"Users Score":2,"Answer":"I would look at the largest value in your data set (i.e. the histogram bin values) multiply that value by a number greater than 1 (say 1.5) and use that to define the y axis value. This way it will appear above your histogram regardless of the values within the histogram.","Q_Score":90,"Tags":"python,matplotlib,axis","A_Id":16180974,"CreationDate":"2013-04-23T23:35:00.000","Title":"Drawing average line in histogram (matplotlib)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have just come across an interesting interview style type of question which I couldn't get my head around. \nBasically, given a number to alphabet mapping such that [1:A, 2:B, 3:C ...], print out all possible combinations.\nFor instance \"123\" will generate [ABC, LC, AW] since it can be separated into 12,3 and 1,23. \nI'm thinking it has to be some type of recursive function where it checks with windows of size 1 and 2 and appending to a previous result if it's a valid letter mapping. 
\nIf anyone can formulate some pseudo\/python code that'd be much appreciated.","AnswerCount":6,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":1582,"Q_Id":16183941,"Users Score":1,"Answer":"As simple as a tree \nLet suppose you have give \"1261\"\nConstruct a tree with it a Root .\nBy defining the node(left , right ) , where left is always direct map and right is combo \nversion suppose for the if you take given Number as 1261\n1261 -> \n(1(261) ,12(61)) -> 1 is left-node(direct map -> a) 12 is right node(combo-map1,2->L)\n(A(261) , L(61)) ->\n(A(2(61),26(1))) ,L(6(1)) ->\n(A(B(6(1)),Z(1)) ,L(F(1))) ->\n(A(B(F(1)),Z(A)) ,L(F(A))) ->\n(A(B(F(A)),Z(A)) ,L(F(A)))\nso now you have got all the leaf node..\njust print all paths from root to leaf node , this gives you all possible combinations .\nlike in this case\nABFA , AZA , LFA\nSo once you are done with the construction of tree just print all paths from root to node \nwhich is your requirement .","Q_Score":1,"Tags":"python,algorithm,alphabetical","A_Id":16186125,"CreationDate":"2013-04-24T05:21:00.000","Title":"How to generate a list of all possible alphabetical combinations based on an input of numbers","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have a very simple program (single-threaded) where we we do a bunch of random sample generation. For this we are using several calls of the numpy random functions (like normal or random_sample). Sometimes the result of one random call determines the number of times another random function is called.\nNow I want to set a seed in the beginning s.th. multiple runs of my program should yield the same result. For this I'm using an instance of the numpy class RandomState. While this is the case in the beginning, at some time the results become different and this is why I'm wondering.\nWhen I am doing everything correctly, having no concurrency and thereby a linear call of the functions AND no other random number generator involded, why does it not work?","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":1368,"Q_Id":16220585,"Users Score":-1,"Answer":"If reproducibility is very important to you, I'm not sure I'd fully trust any PRNG to always produce the same output given the same seed. You might consider capturing the random numbers in one phase, saving them for reuse; then in a second phase, replay the random numbers you've captured. That's the only way to eliminate the possibility of non-reproducibility -- and it solves your current problem too.","Q_Score":3,"Tags":"python,random,numpy,prng","A_Id":16223497,"CreationDate":"2013-04-25T16:57:00.000","Title":"Python numpy - Reproducibility of random numbers","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have a very simple program (single-threaded) where we we do a bunch of random sample generation. For this we are using several calls of the numpy random functions (like normal or random_sample). Sometimes the result of one random call determines the number of times another random function is called.\nNow I want to set a seed in the beginning s.th. multiple runs of my program should yield the same result. 
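For the digit-to-letter question above, the tree walk the answer describes amounts to a short recursion over one- and two-digit prefixes. Here is a hedged Python sketch; the handling of leading zeros is my assumption, since the original answer does not discuss it.

```python
def decodings(digits):
    # All letter decodings of a digit string under 1 -> A ... 26 -> Z,
    # branching on a one-digit or two-digit prefix at each step.
    if not digits:
        return [""]
    results = []
    for size in (1, 2):
        head, tail = digits[:size], digits[size:]
        if len(head) == size and head[0] != "0" and 1 <= int(head) <= 26:
            letter = chr(ord("A") + int(head) - 1)
            results.extend(letter + rest for rest in decodings(tail))
    return results

# decodings("123") -> ['ABC', 'AW', 'LC']
```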
For this I'm using an instance of the numpy class RandomState. While this is the case in the beginning, at some time the results become different and this is why I'm wondering.\nWhen I am doing everything correctly, having no concurrency and thereby a linear call of the functions AND no other random number generator involded, why does it not work?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1368,"Q_Id":16220585,"Users Score":5,"Answer":"Okay, David was right. The PRNGs in numpy work correctly. Throughout every minimal example I created, they worked as they are supposed to.\nMy problem was a different one, but finally I solved it. Do never loop over a dictionary within a deterministic algorithm. It seems that Python orders the items arbitrarily when calling the .item() function for getting in iterator.\nSo I am not that disappointed that this was this kind of error, because it is a useful reminder of what to think about when trying to do reproducible simulations.","Q_Score":3,"Tags":"python,random,numpy,prng","A_Id":16296438,"CreationDate":"2013-04-25T16:57:00.000","Title":"Python numpy - Reproducibility of random numbers","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am just starting out with nltk, and I am following the book. Chapter six is about text classification, and i am a bit confused about something. In the examples (the names, and movie reviews) the classifier is trained to select between two well-defined labels (male-female, and pos-neg). But how to train if you have only one label. \nSay I have a bunch of movie plot outlines, and I am only interested in fishing out movies from the sci-fi genre. Can I train a classifier to only recognize sci-fi plots, en say f.i. if classification confidence is > 80%, then put it in the sci-fi group, otherwise, just ignore it.\nHope somebody can clarify, thank you,","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":441,"Q_Id":16230984,"Users Score":0,"Answer":"I see two questions \n\nHow to train the system?\nCan the system consist of \"sci-fi\" and \"others\"?\n\nThe answer to 2 is yes. Having a 80% confidence threshold idea also makes sense, as long as you see with your data, features and algorithm that 80% is a good threshold. (If not, you may want to consider lowering it if not all sci-fi movies are being classified as sci-fi, or lowering it, if too many non-sci-fi movies are being categorized as sci-fi.)\nThe answer to 1 depends on the data you have, the features you can extract, etc. Jared's approach seems reasonable. Like Jared, I'd also to emphasize the importance of enough and representative data.","Q_Score":0,"Tags":"python,machine-learning,nlp,classification,nltk","A_Id":16231323,"CreationDate":"2013-04-26T07:29:00.000","Title":"train nltk classifier for just one label","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am just starting out with nltk, and I am following the book. Chapter six is about text classification, and i am a bit confused about something. In the examples (the names, and movie reviews) the classifier is trained to select between two well-defined labels (male-female, and pos-neg). 
But how to train if you have only one label. \nSay I have a bunch of movie plot outlines, and I am only interested in fishing out movies from the sci-fi genre. Can I train a classifier to only recognize sci-fi plots, en say f.i. if classification confidence is > 80%, then put it in the sci-fi group, otherwise, just ignore it.\nHope somebody can clarify, thank you,","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":441,"Q_Id":16230984,"Users Score":0,"Answer":"You can simply train a binary classifier to distinguish between sci-fi and not sci-fi\nSo train on the movie plots that are labeled as sci-fi and also on a selection of all other genres. It might be a good idea to have a representative sample of the same size for the other genres such that not all are of the romantic comedy genre, for instance.","Q_Score":0,"Tags":"python,machine-learning,nlp,classification,nltk","A_Id":16231216,"CreationDate":"2013-04-26T07:29:00.000","Title":"train nltk classifier for just one label","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use function np.random.curve_fit(x,a,b,c,...,z) with a big but fixed number of fitting parameters. Is it possible to use tuples or lists here for shortness, like np.random.curve_fit(x,P), where P=(a,b,c,...,z)?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":77,"Q_Id":16236652,"Users Score":4,"Answer":"Well, to convert your example, you would use np.random.normal(x, *P). However, np.random.normal(x,a,b,c,...,z) wouldn't actually work. Maybe you meant another function?","Q_Score":0,"Tags":"python","A_Id":16236700,"CreationDate":"2013-04-26T12:37:00.000","Title":"list instead of separated arguments in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My objective is to cluster words based on how similar they are with respect to a corpus of text documents. I have computed Jaccard Similarity between every pair of words. In other words, I have a sparse distance matrix available with me. Can anyone point me to any clustering algorithm (and possibly its library in Python) which takes distance matrix as input ? I also do not know the number of clusters beforehand. I only want to cluster these words and obtain which words are clustered together.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28598,"Q_Id":16246066,"Users Score":0,"Answer":"Recommend to take a look at agglomerative clustering.","Q_Score":24,"Tags":"python,cluster-computing,scikit-learn,hierarchical-clustering","A_Id":54499731,"CreationDate":"2013-04-26T22:19:00.000","Title":"Clustering words based on Distance Matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What would be the most efficient algorithm to solve a linear equation in one variable given as a string input to a function? 
For example, for input string: \n\n\"x + 9 \u2013 2 - 4 + x = \u2013 x + 5 \u2013 1 + 3 \u2013 x\" \n\nThe output should be 1.\nI am considering using a stack and pushing each string token onto it as I encounter spaces in the string. If the input was in polish notation then it would have been easier to pop numbers off the stack to get to a result, but I am not sure what approach to take here.\nIt is an interview question.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":3057,"Q_Id":16273351,"Users Score":1,"Answer":"The first thing is to parse the string, to identify the various tokens (numbers, variables and operators), so that an expression tree can be formed by giving operator proper precedences.\nRegular expressions can help, but that's not the only method (grammar parsers like boost::spirit are good too, and you can even run your own: its all a \"find and recourse\").\nThe tree can then be manipulated reducing the nodes executing those operation that deals with constants and by grouping variables related operations, executing them accordingly.\nThis goes on recursively until you remain with a variable related node and a constant node.\nAt the point the solution is calculated trivially.\nThey are basically the same principles that leads to the production of an interpreter or a compiler.","Q_Score":1,"Tags":"c++,python,algorithm,linear-algebra","A_Id":16273799,"CreationDate":"2013-04-29T07:30:00.000","Title":"Solving a linear equation in one variable","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to generate a random number between 0.1 and 1.0.\nWe can't use rand.randint because it returns integers.\nWe have also tried random.uniform(0.1,1.0), but it returns a value >= 0.1 and < 1.0, we can't use this, because our search includes also 1.0.\nDoes somebody else have an idea for this problem?","AnswerCount":10,"Available Count":2,"Score":0.0199973338,"is_accepted":false,"ViewCount":37037,"Q_Id":16288749,"Users Score":1,"Answer":"Try \n random.randint(1, 10)\/100.0","Q_Score":26,"Tags":"python,random,floating-point","A_Id":31707656,"CreationDate":"2013-04-29T21:43:00.000","Title":"Generate random number between 0.1 and 1.0. Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to generate a random number between 0.1 and 1.0.\nWe can't use rand.randint because it returns integers.\nWe have also tried random.uniform(0.1,1.0), but it returns a value >= 0.1 and < 1.0, we can't use this, because our search includes also 1.0.\nDoes somebody else have an idea for this problem?","AnswerCount":10,"Available Count":2,"Score":0.0199973338,"is_accepted":false,"ViewCount":37037,"Q_Id":16288749,"Users Score":1,"Answer":"The standard way would be random.random() * 0.9 + 0.1 (random.uniform() internally does just this). This will return numbers between 0.1 and 1.0 without the upper border.\nBut wait! 0.1 (aka \u00b9\/\u2081\u2080) has no clear binary representation (as \u2153 in decimal)! So You won't get a true 0.1 anyway, simply because the computer cannot represent it internally. 
Sorry ;-)","Q_Score":26,"Tags":"python,random,floating-point","A_Id":16289924,"CreationDate":"2013-04-29T21:43:00.000","Title":"Generate random number between 0.1 and 1.0. Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is a good way to sample integers in the range {0,...,n-1} according to (a discrete version of) the exponential distribution? random.expovariate(lambd) returns a real number from 0 to positive infinity.\nUpdate. Changed title to make it more accurate.","AnswerCount":3,"Available Count":1,"Score":-0.0665680765,"is_accepted":false,"ViewCount":2007,"Q_Id":16317420,"Users Score":-1,"Answer":"The simple answer is: pick a random number from geometric distribution and return mod n.\nEg: random.geometric(p)%n\nP(x) = p(1-p)^x+ p(1-p)^(x+n) + p(1-p)^(x+2n) ....\n= p(1-p)^x *(1+(1-p)^n +(1-p)^(2n) ... )\nNote that second part is a constant for a given p and n. The first part is geometric.","Q_Score":2,"Tags":"python,math","A_Id":16319018,"CreationDate":"2013-05-01T11:37:00.000","Title":"Sample integers from truncated geometric distribution","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've come up with 2 methods to generate relatively short random strings- one is much faster and simpler and the other much slower but I think more random. Is there a not-super-complicated method or way to measure how random the data from each method might be?\nI've tried compressing the output strings (via zlib) figuring the more truly random the data, the less it will compress but that hasn't proved much.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":740,"Q_Id":16320412,"Users Score":0,"Answer":"You can use some mapping to convert strings to numeric and then apply standard tests like Diehard and TestU01. Note that long sequences of samples are needed (typically few MB files will do)","Q_Score":10,"Tags":"python,random","A_Id":16320580,"CreationDate":"2013-05-01T14:49:00.000","Title":"Quantifying randomness","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've come up with 2 methods to generate relatively short random strings- one is much faster and simpler and the other much slower but I think more random. Is there a not-super-complicated method or way to measure how random the data from each method might be?\nI've tried compressing the output strings (via zlib) figuring the more truly random the data, the less it will compress but that hasn't proved much.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":740,"Q_Id":16320412,"Users Score":0,"Answer":"An outcome is considered random if it can't be predicted ahead of time with certainty. If it can be predicted with certainty it is considered deterministic. This is a binary categorization, outcomes either are deterministic or random, there aren't degrees of randomness. There are, however, degrees of predictability. One measure of predictability is entropy, as mentioned by EMS.\nConsider two games. 
You don't know on any given play whether you're going to win or lose. In game 1, the probability of winning is 1\/2, i.e., you win about half the time in the long run. In game 2, the odds of winning are 1\/100. Both games are considered random, because the outcome isn't a dead certainty. Game 1 has greater entropy than game 2, because the outcome is less predictable - while there's a chance of winning, you're pretty sure you're going to lose on any given trial.\nThe amount of compression that can be achieved (by a good compression algorithm) for a sequence of values is related to the entropy of the sequence. English has pretty low entropy (lots of redundant info both in the relative frequency of letters and the sequences of words that occur as groups), and hence tends to compress pretty well.","Q_Score":10,"Tags":"python,random","A_Id":16328721,"CreationDate":"2013-05-01T14:49:00.000","Title":"Quantifying randomness","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a csv file of [66k, 56k] size (rows, columns). Its a sparse matrix. I know that numpy can handle that size a matrix. I would like to know based on everyone's experience, how many features scikit-learn algorithms can handle comfortably?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2840,"Q_Id":16326699,"Users Score":1,"Answer":"Some linear model (Regression, SGD, Bayes) will probably be your best bet if you need to train your model frequently. \nAlthough before you go running any models you could try the following \n1) Feature reduction. Are there features in your data that could easily be removed? For example if your data is text or ratings based there are lots known options available. \n2) Learning curve analysis. Maybe you only need a small subset of your data to train a model, and after that you are only fitting to your data or gaining tiny increases in accuracy. \nBoth approaches could allow you to greatly reduce the training data required.","Q_Score":6,"Tags":"python,numpy,machine-learning,scipy,scikit-learn","A_Id":16332805,"CreationDate":"2013-05-01T21:18:00.000","Title":"How many features can scikit-learn handle?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using mayavi.mlab to display 3D data extracted from images. The data is as follows:\n\n3D camera parameters as 3 lines in the x, y, x direction around the camera center, usually for about 20 cameras using mlab.plot3d().\n3D coloured points in space for about 4000 points using mlab.points3d().\n\nFor (1) I have a function to draw each line for each camera seperately. If I am correct, all these lines are added to the mayavi pipeline for the current scene. Upon mlab.show() the scene takes about 10 seconds to render all these lines.\nFor (2) I couldn't find a way to plot all the points at once with each point a different color, so at the moment I iterate with mlab.points3d(x,y,z, color = color). I have newer waited for this routine to finish as it takes to long. 
If I plot all the points at once with the same color, it takes about 2 seconds.\nI already tried to start my script with fig.scene.disable_render = True and resetting fig.scene.disable_render = False before displaying the scene with mlab.show().\nHow can I display my data with mayavi within a reasonable waiting time?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1606,"Q_Id":16364311,"Users Score":2,"Answer":"The general principle is that vtk objects have a lot of overhead, and so you for rendering performance you want to pack as many things into one object as possible. When you call mlab convenience functions like points3d it creates a new vtk object to handle that data. Thus iterating and creating thousands of single points as vtk objects is a very bad idea.\nThe trick of temporarily disabling the rendering as in that other question -- the \"right\" way to do it is to have one VTK object that holds all of the different points.\nTo set the different points as different colors, give scalar values to the vtk object.\nx,y,z=np.random.random((3,100))\n some_data=mlab.points3d(x,y,z,colormap='cool')\n some_data.mlab_source.dataset.point_data.scalars=np.random.random((100,))\nThis only works if you can adequately represent the color values you need in a colormap. This is easy if you need a small finite number of colors or a small finite number of simple colormaps, but very difficult if you need completely arbitrary colors.","Q_Score":7,"Tags":"python,mayavi","A_Id":17346232,"CreationDate":"2013-05-03T17:13:00.000","Title":"Render a mayavi scene with a large pipeline faster","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a function f(x_1, x_2, ..., x_n) where n >= 1 that I would like to integrate. What algorithm should I use to provide a decently stable \/ accurate solution?\nI would like to program it in Python so any open source examples are more than welcome!\n(I realize that I should use a library but this is just a learning exercise.)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":270,"Q_Id":16382019,"Users Score":2,"Answer":"It depends on your context and the performance criteria. 
I assume that you are looking for a numerical approximation (as opposed to a algebraic integration)\nA Riemann Sum is the standard 'educational' way of numerically calculating integrals but several computationally more efficient algorithms exist.","Q_Score":0,"Tags":"python,algorithm,math,integral","A_Id":16382307,"CreationDate":"2013-05-05T06:30:00.000","Title":"How do I integrate a multivariable function?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to delete the first three rows of a dataframe in pandas.\nI know df.ix[:-1] would remove the last row, but I can't figure out how to remove first n rows.","AnswerCount":8,"Available Count":2,"Score":0.1243530018,"is_accepted":false,"ViewCount":412922,"Q_Id":16396903,"Users Score":5,"Answer":"inp0= pd.read_csv(\"bank_marketing_updated_v1.csv\",skiprows=2)\nor if you want to do in existing dataframe\nsimply do following command","Q_Score":249,"Tags":"python,pandas","A_Id":61941548,"CreationDate":"2013-05-06T10:35:00.000","Title":"Delete the first three rows of a dataframe in pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to delete the first three rows of a dataframe in pandas.\nI know df.ix[:-1] would remove the last row, but I can't figure out how to remove first n rows.","AnswerCount":8,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":412922,"Q_Id":16396903,"Users Score":9,"Answer":"A simple way is to use tail(-n) to remove the first n rows\ndf=df.tail(-3)","Q_Score":249,"Tags":"python,pandas","A_Id":52984033,"CreationDate":"2013-05-06T10:35:00.000","Title":"Delete the first three rows of a dataframe in pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a bunch of code that deals with document clustering. One step involves calculating the similarity (for some unimportant definition of \"similar\") of every document to every other document in a given corpus, and storing the similarities for later use. The similarities are bucketed, and I don't care what the specific similarity is for purposes of my analysis, just what bucket it's in. For example, if documents 15378 and 3278 are 52% similar, the ordered pair (3278, 15378) gets stored in the [0.5,0.6) bucket. Documents sometimes get either added or removed from the corpus after initial analysis, so corresponding pairs get added to or removed from the buckets as needed.\nI'm looking at strategies for storing these lists of ID pairs. We found a SQL database (where most of our other data for this project lives) to be too slow and too large disk-space-wise for our purposes, so at the moment we store each bucket as a compressed list of integers on disk (originally zlib-compressed, but now using lz4 instead for speed). 
Things I like about this:\n\nReading and writing are both quite fast\nAfter-the-fact additions to the corpus are fairly straightforward to add (a bit less so for lz4 than for zlib because lz4 doesn't have a framing mechanism built in, but doable)\nAt both write and read time, data can be streamed so it doesn't need to be held in memory all at once, which would be prohibitive given the size of our corpora\n\nThings that kind of suck:\n\nDeletes are a huge pain, and basically involve streaming through all the buckets and writing out new ones that omit any pairs that contain the ID of a document that's been deleted\nI suspect I could still do better both in terms of speed and compactness with a more special-purpose data structure and\/or compression strategy\n\nSo: what kinds of data structures should I be looking at? I suspect that the right answer is some kind of exotic succinct data structure, but this isn't a space I know very well. Also, if it matters: all of the document IDs are unsigned 32-bit ints, and the current code that handles this data is written in C, as Python extensions, so that's probably the general technology family we'll stick with if possible.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":704,"Q_Id":16426469,"Users Score":1,"Answer":"How about using one hash table or B-tree per bucket?\nOn-disk hashtables are standard. Maybe the BerkeleyDB libraries (availabe in stock python) will work for you; but be advised that they since they come with transactions they can be slow, and may require some tuning. There are a number of choices: gdbm, tdb that you should all give a try. Just make sure you check out the API and initialize them with appropriate size. Some will not resize automatically, and if you feed them too much data their performance just drops a lot.\nAnyway, you may want to use something even more low-level, without transactions, if you have a lot of changes.\nA pair of ints is a long - and most databases should accept a long as a key; in fact many will accept arbitrary byte sequences as keys.","Q_Score":6,"Tags":"python,c,data-structures,integer","A_Id":16444230,"CreationDate":"2013-05-07T18:54:00.000","Title":"Data structure options for efficiently storing sets of integer pairs on disk?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a bunch of code that deals with document clustering. One step involves calculating the similarity (for some unimportant definition of \"similar\") of every document to every other document in a given corpus, and storing the similarities for later use. The similarities are bucketed, and I don't care what the specific similarity is for purposes of my analysis, just what bucket it's in. For example, if documents 15378 and 3278 are 52% similar, the ordered pair (3278, 15378) gets stored in the [0.5,0.6) bucket. Documents sometimes get either added or removed from the corpus after initial analysis, so corresponding pairs get added to or removed from the buckets as needed.\nI'm looking at strategies for storing these lists of ID pairs. 
We found a SQL database (where most of our other data for this project lives) to be too slow and too large disk-space-wise for our purposes, so at the moment we store each bucket as a compressed list of integers on disk (originally zlib-compressed, but now using lz4 instead for speed). Things I like about this:\n\nReading and writing are both quite fast\nAfter-the-fact additions to the corpus are fairly straightforward to add (a bit less so for lz4 than for zlib because lz4 doesn't have a framing mechanism built in, but doable)\nAt both write and read time, data can be streamed so it doesn't need to be held in memory all at once, which would be prohibitive given the size of our corpora\n\nThings that kind of suck:\n\nDeletes are a huge pain, and basically involve streaming through all the buckets and writing out new ones that omit any pairs that contain the ID of a document that's been deleted\nI suspect I could still do better both in terms of speed and compactness with a more special-purpose data structure and\/or compression strategy\n\nSo: what kinds of data structures should I be looking at? I suspect that the right answer is some kind of exotic succinct data structure, but this isn't a space I know very well. Also, if it matters: all of the document IDs are unsigned 32-bit ints, and the current code that handles this data is written in C, as Python extensions, so that's probably the general technology family we'll stick with if possible.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":704,"Q_Id":16426469,"Users Score":1,"Answer":"Why not just store a table containing stuff that was deleted since the last re-write?\nThis table could be the same structure as your main bucket, maybe with a Bloom filter for quick membership checks.\nYou can re-write the main bucket data without the deleted items either when you were going to re-write it anyway for some other modification, or when the ratio of deleted items:bucket size exceeds some threshold.\n\nThis scheme could work either by storing each deleted pair alongside each bucket, or by storing a single table for all deleted documents: I'm not sure which is a better fit for your requirements.\nKeeping a single table, it's hard to know when you can remove an item unless you know how many buckets it affects, without just re-writing all buckets whenever the deletion table gets too large. 
This could work, but it's a bit stop-the-world.\nYou also have to do two checks for each pair you stream in (ie, for (3278, 15378), you'd check whether either 3278 or 15378 has been deleted, instead of just checking whether pair (3278, 15378) has been deleted.\nConversely, the per-bucket table of each deleted pair would take longer to build, but be slightly faster to check, and easier to collapse when re-writing the bucket.","Q_Score":6,"Tags":"python,c,data-structures,integer","A_Id":16444440,"CreationDate":"2013-05-07T18:54:00.000","Title":"Data structure options for efficiently storing sets of integer pairs on disk?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i am trying to install opencv on my MacbookPro OSX 10.6.8 (snow leopard)\nand Xcode version is 3.2.6\nand result of \"which python\" is\n\nHong-Jun-Choiui-MacBook-Pro:~ teemo$ which python\n \/Library\/Frameworks\/Python.framework\/Versions\/2.7\/bin\/python\n\nand i am suffering from this below..\n\nLinking CXX shared library ..\/..\/lib\/libopencv_contrib.dylib\n[ 57%] Built target opencv_contrib\nmake: * [all] Error 2\n\nFull log is here link by \"brew install -v opencv\"\n 54 248 246 33:7700\/log.txt\nany advice for me?\ni just need opencv lib for python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":447,"Q_Id":16436260,"Users Score":0,"Answer":"Try using macports it builds opencv including python bindings without any issue. \nI have used this for osx 10.8.","Q_Score":0,"Tags":"python,macos,opencv","A_Id":16495361,"CreationDate":"2013-05-08T08:45:00.000","Title":"openCV install Error using brew","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My Question is as follows:\nI know a little bit about ML in Python (using NLTK), and it works ok so far. I can get predictions given certain features. But I want to know, is there a way, to display the best features to achieve a label? I mean the direct opposite of what I've been doing so far (put in all circumstances, and get a label for that)\nI try to make my question clear via an example:\nLet's say I have a database with Soccer games.\nThe Labels are e.g. 'Win', 'Loss', 'Draw'.\nThe Features are e.g. 'Windspeed', 'Rain or not', 'Daytime', 'Fouls committed' etc. \nNow I want to know: Under which circumstances will a Team achieve a Win, Loss or Draw? Basically I want to get back something like this:\nBest conditions for Win: Windspeed=0, No Rain, Afternoon, Fouls=0 etc\nBest conditions for Loss: ...\nIs there a way to achieve this?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":879,"Q_Id":16442055,"Users Score":0,"Answer":"You could compute the representativeness of each feature to separate the classes via feature weighting. The most common method for feature selection (and therefore feature weighting) in Text Classification is chi^2. This measure will tell you which features are better. Based on this information you can analyse the specific values that are best for every case. 
I hope this helps.\nRegards,","Q_Score":3,"Tags":"python,machine-learning,nltk","A_Id":16446014,"CreationDate":"2013-05-08T13:30:00.000","Title":"Machine Learning in Python - Get the best possible feature-combination for a label","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any built-in way to get scikit-learn to perform shuffled stratified k-fold cross-validation? This is one of the most common CV methods, and I am surprised I couldn't find a built-in method to do this.\nI saw that cross_validation.KFold() has a shuffling flag, but it is not stratified. Unfortunately cross_validation.StratifiedKFold() does not have such an option, and cross_validation.StratifiedShuffleSplit() does not produce disjoint folds.\nAm I missing something? Is this planned?\n(obviously I can implement this by myself)","AnswerCount":4,"Available Count":1,"Score":-0.1488850336,"is_accepted":false,"ViewCount":9847,"Q_Id":16448988,"Users Score":-3,"Answer":"As far as I know, this is actually implemented in scikit-learn. \n\n\"\"\"\nStratified ShuffleSplit cross validation iterator\nProvides train\/test indices to split data in train test sets.\nThis cross-validation object is a merge of StratifiedKFold and\nShuffleSplit, which returns stratified randomized folds. The folds\nare made by preserving the percentage of samples for each class.\nNote: like the ShuffleSplit strategy, stratified random splits\ndo not guarantee that all folds will be different, although this is\nstill very likely for sizeable datasets.\n\"\"\"","Q_Score":6,"Tags":"python,machine-learning,scikit-learn,cross-validation","A_Id":16950633,"CreationDate":"2013-05-08T19:46:00.000","Title":"Randomized stratified k-fold cross-validation in scikit-learn?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use the gaussian function in python to generate some numbers between a specific range giving the mean and variance \nso lets say I have a range between 0 and 10 \nand I want my mean to be 3 and variance to be 4\nmean = 3, variance = 4\nhow can I do that ?","AnswerCount":6,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":36597,"Q_Id":16471763,"Users Score":1,"Answer":"If you have a small range of integers, you can create a list with a gaussian distribution of the numbers within that range and then make a random choice from it.","Q_Score":10,"Tags":"python,gaussian","A_Id":16471982,"CreationDate":"2013-05-09T21:51:00.000","Title":"Generating numbers with Gaussian function in a range using python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When using RandomForestRegressor from Sklearn, how do you get the residuals of the regression? 
I would like to plot out these residuals to check the linearity.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2682,"Q_Id":16502445,"Users Score":5,"Answer":"There is no function for that, as we like to keep the interface very simple.\nYou can just do\ny - rf.predict(X)","Q_Score":4,"Tags":"python,python-2.7,numpy,scipy,scikit-learn","A_Id":16524872,"CreationDate":"2013-05-11T22:51:00.000","Title":"Residuals of Random Forest Regression (Python)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am fitting data points using a logistic model. As I sometimes have data with a ydata error, I first used curve_fit and its sigma argument to include my individual standard deviations in the fit. \nNow I switched to leastsq, because I needed also some Goodness of Fit estimation that curve_fit could not provide. Everything works well, but now I miss the possibility to weigh the least sqares as \"sigma\" does with curve_fit.\nHas someone some code example as to how I could weight the least squares also in leastsq?\nThanks, Woodpicker","AnswerCount":3,"Available Count":1,"Score":0.2605204458,"is_accepted":false,"ViewCount":4123,"Q_Id":16510227,"Users Score":4,"Answer":"Assuming your data are in arrays x, y with yerr, and the model is f(p, x), just define the error function to be minimized as (y-f(p,x))\/yerr.","Q_Score":6,"Tags":"python,scipy,curve-fitting,least-squares","A_Id":16521114,"CreationDate":"2013-05-12T17:36:00.000","Title":"Python \/ Scipy - implementing optimize.curve_fit 's sigma into optimize.leastsq","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the Scikit module for Python to implement Stochastic Gradient Boosting. My data set has 2700 instances and 1700 features (x) and contains binary data. My output vector is 'y', and contains 0 or 1 (binary classification). My code is,\n\ngb = GradientBoostingClassifier(n_estimators=1000,learn_rate=1,subsample=0.5)\ngb.fit(x,y)\nprint gb.score(x,y)\n\nOnce I ran it, and got an accuracy of 1.0 (100%), and sometimes I get an accuracy of around 0.46 (46%). Any idea why there is such a huge gap in its performance?","AnswerCount":2,"Available Count":1,"Score":0.4621171573,"is_accepted":false,"ViewCount":1888,"Q_Id":16579775,"Users Score":5,"Answer":"First, a couple of remarks:\n\nthe name of the algorithm is Gradient Boosting (Regression Trees or Machines) and is not directly related to Stochastic Gradient Descent\nyou should never evaluate the accuracy of a machine learning algorithm on you training data, otherwise you won't be able to detect the over-fitting of the model. Use: sklearn.cross_validation.train_test_split to split X and y into a X_train, y_train for fitting and X_test, y_test for scoring instead.\n\nNow to answer your question, GBRT models are indeed non deterministic models. To get deterministic \/ reproducible runs, you can pass random_state=0 to seed the pseudo random number generator (or alternatively pass max_features=None but this is not recommended). \nThe fact that you observe such big variations in your training error is weird though. 
Maybe your output signal if very correlated with a very small number of informative features and most other features are just noise?\nYou could try to fit a RandomForestClassifier model to your data and use the computed feature_importance_ array to discard noisy features and help stabilize your GBRT models.","Q_Score":0,"Tags":"python,machine-learning,scikit-learn,scikits","A_Id":16583343,"CreationDate":"2013-05-16T05:44:00.000","Title":"Stochastic Gradient Boosting giving unpredictable results","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just picked up Pandas to do with some data analysis work in my biology research. Turns out one of the proteins I'm analyzing is called 'NA'.\nI have a matrix with pairwise 'HA, M1, M2, NA, NP...' on the column headers, and the same as \"row headers\" (for the biologists who might read this, I'm working with influenza). \nWhen I import the data into Pandas directly from a CSV file, it reads the \"row headers\" as 'HA, M1, M2...' and then NA gets read as NaN. Is there any way to stop this? The column headers are fine - 'HA, M1, M2, NA, NP etc...'","AnswerCount":2,"Available Count":1,"Score":0.4621171573,"is_accepted":false,"ViewCount":11645,"Q_Id":16596188,"Users Score":5,"Answer":"Just ran into this issue--I specified a str converter for the column instead, so I could keep na elsewhere: \npd.read_csv(... , converters={ \"file name\": str, \"company name\": str})","Q_Score":15,"Tags":"python,pandas,bioinformatics","A_Id":42148893,"CreationDate":"2013-05-16T19:48:00.000","Title":"Pandas Convert 'NA' to NaN","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, I'm trying to install pandas for Python 3.3 and have been having a really hard time- between Python 2.7 and Python 3.3 and other factors.\nSome pertinent information: I am running Mac OSX Lion 10.7.5. I have both Python 2.7 and Python 3.3 installed, but for my programming purposes only use 3.3.\nThis is where I'm at: I explicitly installed pip-3.3 and can now run that command to install things. I have XCode installed, and have also installed the command line tools (from 'Preferences'). I have looked through a number of pages through Google as well as through this site and haven't had any luck getting pandas to download\/download and install.\nI have tried downloading the tarball, 'cd' into the downloaded file and running setup.py install, but to no avail.\nI have downloaded and installed EPD Free, and then added 'Library\/Framework\/Python.framework\/Versions\/Current\/bin:${PATH} to .bash_profile - still doesn't work.\nI'm not sure where to go frome here...when I do pip-3.3 install pandas terminal relates that There was a problem confirming the ssl certificate: and so nothing ends up getting downloaded or installed, for either pandas, or I also tried to the same for numpy as I thought that could be a problem, but the same error was returned.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2260,"Q_Id":16599357,"Users Score":0,"Answer":"Thanks, I just had the same issue with Angstrom Linux on the BeagleBone Black board and the easy_install downgrade solution solved it. 
One thing I did need to do, is after installing easy_install using \nopkg install python-setuptools\nI then had to go into the easy_install file (located in \/usr\/bin\/easy_install)\nand change the top line from #!\/usr\/bin\/python-native\/python to #!\/usr\/bin\/python\nthis fixed easy_install so it would detect python on the BeagleBone and then I could run your solution.","Q_Score":3,"Tags":"python,pandas,pip","A_Id":21592812,"CreationDate":"2013-05-16T23:58:00.000","Title":"Python 3.3 pandas, pip-3.3","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In other words, each element of the outer array will be a row vector from the original 2D array.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5758,"Q_Id":16601049,"Users Score":0,"Answer":"I think it makes little sense to use numpy arrays to do that, just think you're missing out on all the advantages of numpy.","Q_Score":0,"Tags":"python,arrays,numpy,nested,vectorization","A_Id":16607943,"CreationDate":"2013-05-17T03:42:00.000","Title":"How do I convert a 2D numpy array into a 1D numpy array of 1D numpy arrays?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some sampled (univariate) data - but the clock driving the sampling process is inaccurate - resulting in a random slip of (less than) 1 sample every 30. A more accurate clock at approximately 1\/30 of the frequency provides reliable samples for the same data ... allowing me to establish a good estimate of the clock drift.\nI am looking to interpolate the sampled data to correct for this so that I 'fit' the high frequency data to the low-frequency. I need to do this 'real time' - with no more than the latency of a few low-frequency samples.\nI recognise that there is a wide range of interpolation algorithms - and, among those I've considered, a spline based approach looks most promising for this data.\nI'm working in Python - and have found the scipy.interpolate package - though I could see no obvious way to use it to 'stretch' n samples to correct a small timing error. Am I overlooking something?\nI am interested in pointers to either a suitable published algorithm, or - ideally - a Python library function to achieve this sort of transform. Is this supported by SciPy (or anything else)?\nUPDATE...\nI'm beginning to realise that what, at first, seemed a trivial problem isn't as straightforward as I first thought. I am no-longer convinced that naive use of splines will suffice. I've also realised that my problem can be better described without reference to 'clock drift'... like this:\nA single random variable is sampled at two different frequencies - one low and one high, with no common divisor - e.g. 5hz and 144hz. If we assume sample 0 is identical at both sample rates, sample 1 @5hz falls between samples 28 amd 29. 
I want to construct a new series - at 720hz, say - that fits all the known data points \"as smoothly as possible\".\nI had hoped to find an 'out of the box' solution.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":426,"Q_Id":16625298,"Users Score":1,"Answer":"Before you can ask the programming question, it seems to me you need to investigate a more fundamental scientific one.\nBefore you can start picking out particular equations to fit badfastclock to goodslowclock, you should investigate the nature of the drift. Let both clocks run a while, and look at their points together. Is badfastclock bad because it drifts linearly away from real time? If so, a simple quadratic equation should fit badfastclock to goodslowclock, just as a quadratic equation describes the linear acceleration of a object in gravity; i.e., if badfastclock is accelerating linearly away from real time, you can deterministically shift badfastclock toward real time. However, if you find that badfastclock is bad because it is jumping around, then smooth curves -- even complex smooth curves like splines -- won't fit. You must understand the data before trying to manipulate it.","Q_Score":2,"Tags":"python,scipy,numeric,numerical-methods","A_Id":16652325,"CreationDate":"2013-05-18T14:20:00.000","Title":"Interpolaton algorithm to correct a slight clock drift","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some sampled (univariate) data - but the clock driving the sampling process is inaccurate - resulting in a random slip of (less than) 1 sample every 30. A more accurate clock at approximately 1\/30 of the frequency provides reliable samples for the same data ... allowing me to establish a good estimate of the clock drift.\nI am looking to interpolate the sampled data to correct for this so that I 'fit' the high frequency data to the low-frequency. I need to do this 'real time' - with no more than the latency of a few low-frequency samples.\nI recognise that there is a wide range of interpolation algorithms - and, among those I've considered, a spline based approach looks most promising for this data.\nI'm working in Python - and have found the scipy.interpolate package - though I could see no obvious way to use it to 'stretch' n samples to correct a small timing error. Am I overlooking something?\nI am interested in pointers to either a suitable published algorithm, or - ideally - a Python library function to achieve this sort of transform. Is this supported by SciPy (or anything else)?\nUPDATE...\nI'm beginning to realise that what, at first, seemed a trivial problem isn't as straightforward as I first thought. I am no-longer convinced that naive use of splines will suffice. I've also realised that my problem can be better described without reference to 'clock drift'... like this:\nA single random variable is sampled at two different frequencies - one low and one high, with no common divisor - e.g. 5hz and 144hz. If we assume sample 0 is identical at both sample rates, sample 1 @5hz falls between samples 28 amd 29. 
I want to construct a new series - at 720hz, say - that fits all the known data points \"as smoothly as possible\".\nI had hoped to find an 'out of the box' solution.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":426,"Q_Id":16625298,"Users Score":0,"Answer":"Bsed on your updated question, if the data is smooth with time, just place all the samples in a time trace, and interpolate on the sparse grid (time).","Q_Score":2,"Tags":"python,scipy,numeric,numerical-methods","A_Id":16708058,"CreationDate":"2013-05-18T14:20:00.000","Title":"Interpolaton algorithm to correct a slight clock drift","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I reconized numpy can link with blas, and I thought of why not using gpu accelerated blas library.\nDid anyone use to do so?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2411,"Q_Id":16629529,"Users Score":0,"Answer":"If memory servers, pyCuda at least, probably also pyOpenCL can work with numPy","Q_Score":2,"Tags":"python,numpy,opencl,gpgpu","A_Id":18478963,"CreationDate":"2013-05-18T22:19:00.000","Title":"Can I link numpy with AMD's gpu accelerated blas library","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Which python data type should I use to create a huge 2d array (7Mx7M) with fast random access? I want to write each element once and read many times.\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":390,"Q_Id":16656850,"Users Score":1,"Answer":"It would help to know more about your data, and what kind of access you need to provide. How fast is \"fast enough\" for you? Just to be clear, \"7M\" means 7,000,000 right?\nAs a quick answer without any of that information, I have had positive experiences working with redis and tokyo tyrant for fast read access to large amounts of data, either hundreds of megabytes or gigabytes.","Q_Score":0,"Tags":"python,arrays,2d,large-data","A_Id":16657039,"CreationDate":"2013-05-20T19:25:00.000","Title":"Which python data type should I use to create a huge 2d array (7Mx7M) with fast random access?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In matplotlib.pyplot, what is the difference between plt.clf() and plt.close()? Will they function the same way?\nI am running a loop where at the end of each iteration I am producing a figure and saving the plot. On first couple tries the plot was retaining the old figures in every subsequent plot. I'm looking for, individual plots for each iteration without the old figures, does it matter which one I use? 
The calculation I'm running takes a very long time and it would be very time consuming to test it out.","AnswerCount":4,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":49996,"Q_Id":16661790,"Users Score":9,"Answer":"I think it is worth mentioning that plt.close() releases the memory, thus is preferred when generating and saving many figures in one run.\nUsing plt.clf() in such case will produce a warning after 20 plots (even if they are not going to be shown by plt.show()):\n\nMore than 20 figures have been opened. Figures created through the\n pyplot interface (matplotlib.pyplot.figure) are retained until\n explicitly closed and may consume too much memory.","Q_Score":32,"Tags":"python,matplotlib","A_Id":46957388,"CreationDate":"2013-05-21T03:47:00.000","Title":"Difference between plt.close() and plt.clf()","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In matplotlib.pyplot, what is the difference between plt.clf() and plt.close()? Will they function the same way?\nI am running a loop where at the end of each iteration I am producing a figure and saving the plot. On first couple tries the plot was retaining the old figures in every subsequent plot. I'm looking for, individual plots for each iteration without the old figures, does it matter which one I use? The calculation I'm running takes a very long time and it would be very time consuming to test it out.","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":49996,"Q_Id":16661790,"Users Score":2,"Answer":"plt.clf() clears the entire current figure with all its axes, but leaves the window opened, such that it may be reused for other plots.\nplt.close() closes a window, which will be the current window, if not specified otherwise.","Q_Score":32,"Tags":"python,matplotlib","A_Id":44976331,"CreationDate":"2013-05-21T03:47:00.000","Title":"Difference between plt.close() and plt.clf()","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In matplotlib.pyplot, what is the difference between plt.clf() and plt.close()? Will they function the same way?\nI am running a loop where at the end of each iteration I am producing a figure and saving the plot. On first couple tries the plot was retaining the old figures in every subsequent plot. I'm looking for, individual plots for each iteration without the old figures, does it matter which one I use? 
The calculation I'm running takes a very long time and it would be very time consuming to test it out.","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":49996,"Q_Id":16661790,"Users Score":41,"Answer":"plt.close() will close the figure window entirely, where plt.clf() will just clear the figure - you can still paint another plot onto it.\nIt sounds like, for your needs, you should be preferring plt.clf(), or better yet keep a handle on the line objects themselves (they are returned in lists by plot calls) and use .set_data on those in subsequent iterations.","Q_Score":32,"Tags":"python,matplotlib","A_Id":16661815,"CreationDate":"2013-05-21T03:47:00.000","Title":"Difference between plt.close() and plt.clf()","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Matlab cell array, each of whose cells contains an N x M matrix. The value of M varies across cells.\nWhat would be an efficient way to represent this type of a structure in Python using numpy or any standard Python data-structure?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1299,"Q_Id":16689681,"Users Score":5,"Answer":"Have you considered a list of numpy.arrays?","Q_Score":1,"Tags":"python,matlab,data-structures,cell","A_Id":16689864,"CreationDate":"2013-05-22T10:41:00.000","Title":"Alternative to Matlab cell data-structure in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to decide about a Python computer vision library. I had used OpenCV in C++, and like it very much. However this time I need to develop my algorithm in Python. My short list has three libraries:\n1- OpenCV (Python wrapper)\n2- PIL (Python Image Processing Library)\n3- scikit-image\nWould you please help me to compare these libraries?\nI use numpy, scipy, scikit-learn in the rest of my code. The performance and ease of use is an important factor, also, portability is an important factor for me.\nThanks for your help","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2421,"Q_Id":16697391,"Users Score":7,"Answer":"I have worked mainly with OpenCV and also with scikit-image. I would say that while OpenCV is more focus on computer vision (classification, feature detection and extraction,...). However lately scikit-image is improving rapidly. \nI faced that some algorithms perform faster under OpenCV, however in most cases I find much more easier working with scikit-image, OpenCV documentations is quite cryptic.\nAs long as OpenCV 2.x bindings works with numpy as well as scikit-image I would take into account using both libraries, trying to take the better of each of them. 
At least that is what I have done in my last project.","Q_Score":8,"Tags":"python,opencv,image-processing,computer-vision,scikit-learn","A_Id":16698292,"CreationDate":"2013-05-22T16:49:00.000","Title":"Comparing computer vision libraries in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better?\nThank you for your time","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1736,"Q_Id":16710374,"Users Score":0,"Answer":"When you want to find the shortest path you should use BFS and not DFS, because BFS explores the closest nodes first, so when you reach your goal you know for sure that you used the shortest path and you can stop searching. Whereas DFS explores one branch at a time, so when you reach your goal you can't be sure that there is not another path via another branch that is shorter.\nSo you should use BFS.\nIf your graph has different weights on its edges, then you should use Dijkstra's algorithm, which is an adaptation of BFS for weighted graphs, but don't use it if you don't have weights.\nSome people may recommend the Floyd-Warshall algorithm, but it is a very bad idea for a graph this large.","Q_Score":0,"Tags":"python,algorithm,graph","A_Id":16710763,"CreationDate":"2013-05-23T09:35:00.000","Title":"Breadth First Search or Depth First Search?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better?\nThank you for your time","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1736,"Q_Id":16710374,"Users Score":0,"Answer":"If there are no weights for the edges on the graph, you can do a simple breadth-first search, where you visit nodes in the graph iteratively and check if any of the new nodes equals the destination node. If the edges have weights, Dijkstra's algorithm and the Bellman-Ford algorithm are the ones you should be looking at, depending on the time and space complexities you are aiming for.","Q_Score":0,"Tags":"python,algorithm,graph","A_Id":16710738,"CreationDate":"2013-05-23T09:35:00.000","Title":"Breadth First Search or Depth First Search?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently doing this by using a sort of a greedy algorithm by iterating over the sets from largest to smallest set. 
What would be a good algorithm to choose if i'm more concerned about finding the best solution rather than efficiency?\nDetails:\n1) Each set has a predefined range\n2) My goal is to end up with a lot of densely packed sets rather than reducing the total number of sets.\nExample: Suppose the range is 8.\nThe sets might be: [1,5,7], [2,6], [3,4,5], [1,2] , [4], [1]\nA good result would be [1,5,7,2,6,4], [3,4,5,1,2], [1]","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":124,"Q_Id":16731960,"Users Score":0,"Answer":"I am not sure whether it will give an optimal solution, but would simply repeatedly merging the two largest non-overlapping sets not work?","Q_Score":0,"Tags":"python,algorithm","A_Id":16735119,"CreationDate":"2013-05-24T09:41:00.000","Title":"What is a good way to merge non intersecting sets in a list to end up with denseley packed sets?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Has anyone seen this error in celery (a distribute task worker in Python) before?\nTraceback (most recent call last):\n File \"\/home\/mcapp\/.virtualenv\/lister\/local\/lib\/python2.7\/site-packages\/celery\/task\/trace.py\", line 228, in trace_task\n R = retval = fun(*args, **kwargs)\n File \"\/home\/mcapp\/.virtualenv\/lister\/local\/lib\/python2.7\/site-packages\/celery\/task\/trace.py\", line 415, in protected_call\n return self.run(*args, **kwargs)\n File \"\/home\/mcapp\/lister\/lister\/tasks\/init.py\", line 69, in update_playlist_db\n video_update(videos)\n File \"\/home\/mcapp\/lister\/lister\/tasks\/init.py\", line 55, in video_update\n chord(tasks)(update_complete.s(update_id=update_id, update_type='db', complete=True))\n File \"\/home\/mcapp\/.virtualenv\/lister\/local\/lib\/python2.7\/site-packages\/celery\/canvas.py\", line 464, in call\n _chord = self.type\n File \"\/home\/mcapp\/.virtualenv\/lister\/local\/lib\/python2.7\/site-packages\/celery\/canvas.py\", line 461, in type\n return self._type or self.tasks[0].type.app.tasks['celery.chord']\nIndexError: list index out of range\nThis particular version of celery is 3.0.19, and happens when the celery chord feature is used. We don't think there is any error in our application, as 99% of the time our code works correctly, but under heavier loads this error would happen. We are trying to find out if this is an actual bug in our application or a celery bug, any help would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":859,"Q_Id":16745487,"Users Score":1,"Answer":"This is an error that occurs when a chord header has no tasks in it. Celery tries to access the tasks in the header using self.tasks[0] which results in an index error since there are no tasks in the list.","Q_Score":2,"Tags":"python,runtime-error,celery","A_Id":18938559,"CreationDate":"2013-05-25T01:12:00.000","Title":"celery.chord gives IndexError: list index out of range error in celery version 3.0.19","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Suppose I have a graph of N nodes, and each pair of nodes have a weight associated with them. 
I would like to split this graph into n smaller graphs to reduce the overall weight.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":161,"Q_Id":16751995,"Users Score":0,"Answer":"Remove the k-1 edges with the highest weights.","Q_Score":2,"Tags":"python,algorithm","A_Id":16752052,"CreationDate":"2013-05-25T17:13:00.000","Title":"Split a weighted graph into n graphs to minimize the sum of weights in each graph","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have a graph of N nodes, and each pair of nodes have a weight associated with them. I would like to split this graph into n smaller graphs to reduce the overall weight.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":161,"Q_Id":16751995,"Users Score":1,"Answer":"What you are searching for is called weighted max-cut.","Q_Score":2,"Tags":"python,algorithm","A_Id":16752049,"CreationDate":"2013-05-25T17:13:00.000","Title":"Split a weighted graph into n graphs to minimize the sum of weights in each graph","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I am trying to read a csv file into a double list which is not the problem atm.\nWhat I am trying to do is just print all the sL values between two lines. I.e i want to print sL [200] to sl [300] but i dont want to manually have to type print sL for all values between these two numbers is there a code that can be written to print all values between these two lines that would be the same as typing sL out individually all the way from 200 to 300","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":16766587,"Users Score":0,"Answer":"sed -n 200,300p, perhaps, for 200 to 300 inclusive; adjust the numbers by \u00b11 if exclusive or whatever?","Q_Score":0,"Tags":"python,list,printing,lines","A_Id":16766609,"CreationDate":"2013-05-27T05:05:00.000","Title":"Reading certain lines of a string","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I am trying to read a csv file into a double list which is not the problem atm.\nWhat I am trying to do is just print all the sL values between two lines. 
I.e i want to print sL [200] to sl [300] but i dont want to manually have to type print sL for all values between these two numbers is there a code that can be written to print all values between these two lines that would be the same as typing sL out individually all the way from 200 to 300","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":16766587,"Users Score":0,"Answer":"If it is a specific column ranging between 200 and 300, use the filter() function.\nnew_array = filter(lambda x: x['column'] >= 200 and x['column'] <= 300, sl)","Q_Score":0,"Tags":"python,list,printing,lines","A_Id":16766726,"CreationDate":"2013-05-27T05:05:00.000","Title":"Reading certain lines of a string","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Today I upgraded to Xubuntu 13.04 which comes with Python 3.3. Before that, I was working with Python 3.2, which was working perfectly fine.\nWhen running my script under Python 3.3, I get an\n\nImportError: No module named 'pylab'\n\nin import pylab.\nRunning in Python 3.2, which I reinstalled, throws\n\nImportError: cannot import name multiarray\n\nin import numpy.\nScipy, numpy and matplotlib are, according to apt, on the newest version.\nI don't have much knowledge about this stuff. Do you have any recommendations on how to get my script to work again, preferably on Python 3.2?\nThanks in advance,\nKatrin\nEdit:\nWe solved the problem: Apparently, there were a lot of fragments \/ pieces of the packages in different paths, as I installed from apt, pip as well as manually. After deleting all packages and installing them only via pip, everything works fine. Thank you very much for the help!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":312,"Q_Id":16820903,"Users Score":1,"Answer":"I suspect you need to install python3-matplotlib, python3-numpy, etc. python-matplotlib is the python2 version.","Q_Score":1,"Tags":"python,matplotlib,python-3.3","A_Id":16823062,"CreationDate":"2013-05-29T18:04:00.000","Title":"Pylab after upgrading","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Pandas data frame, one of the columns contains date strings in the format YYYY-MM-DD\nFor e.g. '2013-10-28'\nAt the moment the dtype of the column is object.\nHow do I convert the column values to Pandas date format?","AnswerCount":10,"Available Count":2,"Score":-0.0199973338,"is_accepted":false,"ViewCount":297830,"Q_Id":16852911,"Users Score":-1,"Answer":"Try to convert one of the rows into timestamp using the pd.to_datetime function and then use .map to map the formula to the entire column","Q_Score":133,"Tags":"python,date,pandas","A_Id":60213042,"CreationDate":"2013-05-31T08:23:00.000","Title":"How do I convert strings in a Pandas data frame to a 'date' data type?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Pandas data frame, one of the columns contains date strings in the format YYYY-MM-DD\nFor e.g. 
'2013-10-28'\nAt the moment the dtype of the column is object.\nHow do I convert the column values to Pandas date format?","AnswerCount":10,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":297830,"Q_Id":16852911,"Users Score":25,"Answer":"Now you can do df['column'].dt.date\nNote that for datetime objects, if you don't see the hour when they're all 00:00:00, that's not pandas. That's iPython notebook trying to make things look pretty.","Q_Score":133,"Tags":"python,date,pandas","A_Id":33577649,"CreationDate":"2013-05-31T08:23:00.000","Title":"How do I convert strings in a Pandas data frame to a 'date' data type?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Trying to use gridsearch CV with multiple jobs via the n_jobs argument and I can see using htop that they're all launched and running but they all get stuck\/assigned on the same core using 25% each (I'm using a 4 core machine). I'm on Ubuntu 12.10 and I'm running the latest master pulled from github. Anyone know how to fix this?\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":391,"Q_Id":16877448,"Users Score":2,"Answer":"Found it, there's a note on the svm page that if you enable verbose settings multiprocessing may break. Disabling verbose fixed it","Q_Score":1,"Tags":"python,scikit-learn","A_Id":16877799,"CreationDate":"2013-06-01T21:29:00.000","Title":"Scikit-Learn n_jobs Multiprocessing Locked To One Core","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to save some arrays as TIFF with matplotlib, but I'm getting 24 bit RGB files instead with plt.imsave().\nCan I change that without resorting to the PIL? It's quite important for me to keep everything in pure matplotlib.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":792,"Q_Id":16893102,"Users Score":2,"Answer":"Using matplotlib to export to TIFF will use PIL anyway. As far as I know, matplotlib has native support only for PNG, and uses PIL to convert to other file formats. So when you are using matplotlib to export to TIFF, you can use PIL immediately.","Q_Score":0,"Tags":"python,matplotlib,tiff","A_Id":16893364,"CreationDate":"2013-06-03T08:54:00.000","Title":"How to save 32\/64 bit grayscale floats to TIFF with matplotlib?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to delete a group (by group name) from a groupby object in pandas? 
That is, after performing a groupby, delete a resulting group based on its name.","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":27093,"Q_Id":16910114,"Users Score":2,"Answer":"It is quite simple: you need to use the filter function with a lambda expression:\ndf_filtered = df.groupby('name').filter(lambda x: (x.name == 'cond1' or ...(other conditions)))\nNote that if you want to use more than one condition, you need to put them in brackets ().\nYou will get back a DataFrame, not a GroupBy object.","Q_Score":22,"Tags":"python,pandas","A_Id":69644920,"CreationDate":"2013-06-04T05:00:00.000","Title":"Delete a group after pandas groupby","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to delete a group (by group name) from a groupby object in pandas? That is, after performing a groupby, delete a resulting group based on its name.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":27093,"Q_Id":16910114,"Users Score":0,"Answer":"Should be easy: \ndf.drop(index='group_name',inplace=True)","Q_Score":22,"Tags":"python,pandas","A_Id":58868271,"CreationDate":"2013-06-04T05:00:00.000","Title":"Delete a group after pandas groupby","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large dataset with 500 million rows and 58 variables. I need to sort the dataset using a 59th variable which is calculated using the other 58 variables. The variable happens to be a floating point number with four places after the decimal.\nThere are two possible approaches:\n\nThe normal merge sort\nWhile calculating the 59th variable, I start sending variables in particular ranges to particular nodes. Sort the ranges in those nodes and then combine them in the reducer once I have perfectly sorted data and now I also know where to merge what set of data; it basically becomes appending.\n\nWhich is a better approach and why?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":889,"Q_Id":16925802,"Users Score":0,"Answer":"I'll assume that you are looking for a total sort order without a secondary sort for all your rows. I should also mention that 'better' is never a good question, since there is typically a trade-off between time and space, and in Hadoop we tend to think in terms of space rather than time unless you use products that are optimized for time (TeraData has the capability of putting databases in memory for Hadoop use).\nOut of the two possible approaches you mention, I think only one would work within the Hadoop infrastructure: number 2. Since Hadoop leverages many nodes to do one job, sorting becomes a little trickier to implement, and we typically want the 'shuffle and sort' phase of MR to take care of the sorting, since distributed sorting is at the heart of the programming model.\nAt the point when the 59th variable is generated, you would want to sample the distribution of that variable so that you can send it through the framework and then merge like you mentioned. Consider the case when the variable distribution of x contains 80% of your values. What this might do is send 80% of your data to one reducer, which would do most of the work. 
This assumes of course that some keys will be grouped in the sort and shuffle phase which would be the case unless you programmed them unique. It's up to the programmer to set up partitioners to evenly distribute the load by sampling the key distribution.\nIf on the other hand we were to sort in memory then we could accomplish the same thing during reduce but there are inherent scalability issues since the sort is only as good as the amount of memory available in the node currently running the sort and dies off quickly when it starts to use HDFS to look for the rest of the data that did not fit into memory. And if you ignored the sampling issue you will likely run out of memory unless all your key values pairs are evenly distributed and you understand the memory capacity within your data.","Q_Score":0,"Tags":"python,sorting,hadoop,bigdata,hadoop-streaming","A_Id":16927877,"CreationDate":"2013-06-04T19:19:00.000","Title":"Sorting using Map-Reduce - Possible approach","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Entering arbitrary sized matrices to manipulate them using different operations.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":51,"Q_Id":16969190,"Users Score":1,"Answer":"Read the input till Ctrl+d, split by newline symbols first and then split the results by spaces.","Q_Score":0,"Tags":"python,matrix","A_Id":16969259,"CreationDate":"2013-06-06T18:12:00.000","Title":"I am working on a project where you have to input a matrix that has an arbitrary size. I am using python. Suggestions?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Entering arbitrary sized matrices to manipulate them using different operations.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":51,"Q_Id":16969190,"Users Score":1,"Answer":"Think about who is using this programme, and how, then develop an interface which meets those needs.","Q_Score":0,"Tags":"python,matrix","A_Id":16971642,"CreationDate":"2013-06-06T18:12:00.000","Title":"I am working on a project where you have to input a matrix that has an arbitrary size. I am using python. Suggestions?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed Enthought Canopy 32 - bit which comes with python 2.7 32 bit . And I downloaded windows installer scikit-learn-0.13.1.win32-py2.7 .. My machine is 64 bit. I could'nt find 64 bit scikit learn installer for intel processor, only AMD is available.\nPython 2.7 required which was not found in the registry is the error message I get when I try to run the installer. How do I solve this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":494,"Q_Id":16981708,"Users Score":2,"Answer":"Enthought Canopy 1.0.1 does not register the user's Python installation as the main one for the system. 
This has been fixed and will work in the upcoming release.","Q_Score":0,"Tags":"python,python-2.7,scikit-learn,enthought,epd-python","A_Id":16982178,"CreationDate":"2013-06-07T10:15:00.000","Title":"Scikit-Learn windows Installation error : python 2.7 required which was not found in the registry","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"is there any best practice for adding a \"random\" sort criteria to the old style collection in Plone?\nMy versions:\n\nPlone 4.3 (4305)\nCMF 2.2.7\nZope 2.13.19","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":109,"Q_Id":17001402,"Users Score":0,"Answer":"There is no random sort criteria. Any randomness will need to be done in custom application code.","Q_Score":0,"Tags":"python,collections,plone,zope","A_Id":17032461,"CreationDate":"2013-06-08T16:10:00.000","Title":"New sort criteria \"random\" for Plone 4 old style collections","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In Scikit-learn , K-Means have n_jobs but MiniBatch K-Means is lacking it.\nMBK is faster than KMeans but at large sample sets we would like it distribute the processing across multiprocessing (or other parallel processing libraries).\nIs MKB's Partial-fit the answer?","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":1981,"Q_Id":17053548,"Users Score":3,"Answer":"I don't think this is possible. You could implement something with OpenMP inside the minibatch processing. I'm not aware of any parallel minibatch k-means procedures. Parallizing stochastic gradient descent procedures is somewhat hairy.\nBtw, the n_jobs parameter in KMeans only distributes the different random initializations afaik.","Q_Score":6,"Tags":"python,machine-learning,multiprocessing,scikit-learn","A_Id":17070022,"CreationDate":"2013-06-11T20:48:00.000","Title":"How can i distribute processing of minibatch kmeans (scikit-learn)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to run jobs on a regular basis on compute servers that I share with others in the department and when I start 10 jobs, I really would like it to just take 10 cores and not more; I don't care if it takes a bit longer with a single core per run: I just don't want it to encroach on the others' territory, which would require me to renice the jobs and so on. I just want to have 10 solid cores and that's all.\nI am using Enthought 7.3-1 on Redhat, which is based on Python 2.7.3 and numpy 1.6.1, but the question is more general.","AnswerCount":5,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":25964,"Q_Id":17053671,"Users Score":40,"Answer":"Set the MKL_NUM_THREADS environment variable to 1. As you might have guessed, this environment variable controls the behavior of the Math Kernel Library which is included as part of Enthought's numpy build.\nI just do this in my startup file, .bash_profile, with export MKL_NUM_THREADS=1. 
You should also be able to do it from inside your script to have it be process specific.","Q_Score":63,"Tags":"python,multithreading,numpy","A_Id":17054932,"CreationDate":"2013-06-11T20:56:00.000","Title":"How do you stop numpy from multithreading?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to run jobs on a regular basis on compute servers that I share with others in the department and when I start 10 jobs, I really would like it to just take 10 cores and not more; I don't care if it takes a bit longer with a single core per run: I just don't want it to encroach on the others' territory, which would require me to renice the jobs and so on. I just want to have 10 solid cores and that's all.\nI am using Enthought 7.3-1 on Redhat, which is based on Python 2.7.3 and numpy 1.6.1, but the question is more general.","AnswerCount":5,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":25964,"Q_Id":17053671,"Users Score":12,"Answer":"In more recent versions of numpy I have found it necessary to also set NUMEXPR_NUM_THREADS=1.\nIn my hands, this is sufficient without setting MKL_NUM_THREADS=1, but under some circumstances you may need to set both.","Q_Score":63,"Tags":"python,multithreading,numpy","A_Id":21673595,"CreationDate":"2013-06-11T20:56:00.000","Title":"How do you stop numpy from multithreading?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to run jobs on a regular basis on compute servers that I share with others in the department and when I start 10 jobs, I really would like it to just take 10 cores and not more; I don't care if it takes a bit longer with a single core per run: I just don't want it to encroach on the others' territory, which would require me to renice the jobs and so on. I just want to have 10 solid cores and that's all.\nI am using Enthought 7.3-1 on Redhat, which is based on Python 2.7.3 and numpy 1.6.1, but the question is more general.","AnswerCount":5,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":25964,"Q_Id":17053671,"Users Score":52,"Answer":"Only hopefully this fixes all scenarios and system you may be on. \n\nUse numpy.__config__.show() to see if you are using OpenBLAS or MKL\n\nFrom this point on there are a few ways you can do this.\n2.1. The terminal route export OPENBLAS_NUM_THREADS=1 or export MKL_NUM_THREADS=1\n2.2 (This is my preferred way) In your python script import os and add the line os.environ['OPENBLAS_NUM_THREADS'] = '1' or os.environ['MKL_NUM_THREADS'] = '1'.\nNOTE when setting os.environ[VAR] the number of threads must be a string! 
Also, you may need to set this environment variable before importing numpy\/scipy.\nThere are probably other options besides openBLAS or MKL but step 1 will help you figure that out.","Q_Score":63,"Tags":"python,multithreading,numpy","A_Id":48665619,"CreationDate":"2013-06-11T20:56:00.000","Title":"How do you stop numpy from multithreading?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Eprime outputs a .txt file like this: \n*** Header Start ***\nVersionPersist: 1\nLevelName: Session\nSubject: 7\nSession: 1\nRandomSeed: -1983293234\nGroup: 1\nDisplay.RefreshRate: 59.654\n*** Header End ***\n Level: 2\n *** LogFrame Start ***\n MeansEffectBias: 7\n Procedure: trialProc\n itemID: 7\n bias1Answer: 1\n *** LogFrame End ***\n Level: 2\n *** LogFrame Start ***\n MeansEffectBias: 2\n Procedure: trialProc\n itemID: 2\n bias1Answer: 0\n\n\nI want to parse this and write it to a .csv file but with a number of lines deleted.\nI tried to create a dictionary that took the text appearing before the colon as the key and\nthe text after as the value: \n {subject: [7, 7], bias1Answer : [1, 0], itemID: [7, 2]} \n\ndef load_data(filename):\n data = {}\n eprime = open(filename, 'r')\n for line in eprime:\n rows = re.sub('\\s+', ' ', line).strip().split(':')\n try:\n data[rows[0]] += rows[1]\n except KeyError:\n data[rows[0]] = rows[1]\n eprime.close()\n return data\n\n\nfor line in open(fileName, 'r'):\n if ':' in line:\n row = line.strip().split(':')\n fullDict[row[0]] = row[1]\nprint fullDict\n\nboth of the scripts below produce garbage:\n\n{'\\x00\\t\\x00M\\x00e\\x00a\\x00n\\x00s\\x00E\\x00f\\x00f\\x00e\\x00c\\x00t\\x00B\\x00i\\x00a\\x00s\\x00': '\\x00 \\x005\\x00\\r\\x00', '\\x00\\t\\x00B\\x00i\\x00a\\x00s\\x002\\x00Q\\x00.\\x00D\\x00u\\x00r\\x00a\\x00t\\x00i\\x00o\\x00n\\x00E\\x00r\\x00r\\x00o\\x00r\\x00': '\\x00 \\x00-\\x009\\x009\\x009\\x009\\x009\\x009\\x00\\r\\x00'\n\nIf I could set up the dictionary, I can write it to a csv file that would look like this!!:\n Subject itemID ... bias1Answer \n 7 7 1\n 7 2 0","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1142,"Q_Id":17056818,"Users Score":0,"Answer":"I know this is an older question so maybe you have long since solved it but I think you are approaching this in a more complex way than is needed. I figure I'll respond in case someone else has the same problem and finds this. \nIf you are doing things this way because you do not have a software key, it might help to know that the E-Merge and E-DataAid programs for eprime don't require a key. You only need the key for editing build files. Whoever provided you with the .txt files should probably have an install disk for these programs. If not, it is available on the PST website (I believe you need a serial code to create an account, but not certain)\nEprime generally creates a .edat file that matches the content of the text file you have posted an example of. Sometimes though if eprime crashes you don't get the edat file and only have the .txt. Luckily you can generate the edat file from the .txt file. 
\nHere's how I would approach this issue:\n\nIf you do not have the edat files available first use E-DataAid to recover the files.\nThen presuming you have multiple participants you can use E-Merge to merge all of the edat files together for all participants in who completed this task.\nOpen the merged file. It might look a little chaotic depending on how much you have in the file. You can got to Go to tools->Arrange columns. This will show a list of all your variables. \nAdjust so that only the desired variables are in the right hand box. Hit ok. \nThen you should have something resembling your end goal which can be exported as a csv.\n\nIf you have many procedures in the program you might at this point have lines that just have startup info and NULL in the locations where your variables or interest are. You can fix this by going to tools->filter and creating a filter to eliminate those lines.","Q_Score":7,"Tags":"python,csv,file-io","A_Id":21414260,"CreationDate":"2013-06-12T02:33:00.000","Title":"Parsing a txt file into a dictionary to write to csv file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was looking at the option of embedding python into fortran90 to add python functionality to my existing fortran90 code. I know that it can be done the other way around by extending python with fortran90 using the f2py from numpy. But, i want to keep my super optimized main loop in fortran and add python to do some additional tasks \/ evaluate further developments before I can do it in fortran, and also to ease up code maintenance. I am looking for answers for the following questions:\n1) Is there a library that already exists from which I can embed python into fortran? (I am aware of f2py and it does it the other way around)\n2) How do we take care of data transfer from fortran to python and back?\n3) How can we have a call back functionality implemented? (Let me describe the scenario a bit....I have my main_fortran program in Fortran, that call Func1_Python module in python. Now, from this Func1_Python, I want to call another function...say Func2_Fortran in fortran)\n4) What would be the impact of embedding the interpreter of python inside fortran in terms of performance....like loading time, running time, sending data (a large array in double precision) across etc.\nThanks a lot in advance for your help!!\nEdit1: I want to set the direction of the discussion right by adding some more information about the work I am doing. I am into scientific computing stuff. So, I would be working a lot on huge arrays \/ matrices in double precision and doing floating point operations. So, there are very few options other than fortran really to do the work for me. The reason i want to include python into my code is that I can use NumPy for doing some basic computations if necessary and extend the capabilities of the code with minimal effort. For example, I can use several libraries available to link between python and some other package (say OpenFoam using PyFoam library).","AnswerCount":7,"Available Count":1,"Score":0.1137907297,"is_accepted":false,"ViewCount":9062,"Q_Id":17075418,"Users Score":4,"Answer":"There is a very easy way to do this using f2py. Write your python method and add it as an input to your Fortran subroutine. 
Declare it in both the cf2py hook and the type declaration as EXTERNAL and also as its return value type, e.g. REAL*8. Your Fortran code will then have a pointer to the address where the python method is stored. It will be SLOW AS MOLASSES, but for testing out algorithms it can be useful. I do this often (I port a lot of ancient spaghetti Fortran to python modules...) It's also a great way to use things like optimised Scipy calls in legacy fortran","Q_Score":8,"Tags":"python,fortran,embed","A_Id":23725918,"CreationDate":"2013-06-12T21:09:00.000","Title":"Embed python into fortran 90","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Right now I'm importing a fairly large CSV as a dataframe every time I run the script. Is there a good solution for keeping that dataframe constantly available in between runs so I don't have to spend all that time waiting for the script to run?","AnswerCount":13,"Available Count":1,"Score":0.0307595242,"is_accepted":false,"ViewCount":432843,"Q_Id":17098654,"Users Score":2,"Answer":"Another quite fresh test with to_pickle().\nI have 25 .csv files in total to process and the final dataframe consists of roughly 2M items.\n(Note: Besides loading the .csv files, I also manipulate some data and extend the data frame by new columns.)\nGoing through all 25 .csv files and create the dataframe takes around 14 sec.\nLoading the whole dataframe from a pkl file takes less than 1 sec","Q_Score":415,"Tags":"python,pandas,dataframe","A_Id":63390537,"CreationDate":"2013-06-13T23:05:00.000","Title":"How to reversibly store and load a Pandas dataframe to\/from disk","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My project currently uses NumPy, only for memory-efficient arrays (of bool_, uint8, uint16, uint32).\nI'd like to get it running on PyPy which doesn't support NumPy. (failed to install it, at any rate)\nSo I'm wondering: Is there any other memory-efficient way to store arrays of numbers in Python? Anything that is supported by PyPy? Does PyPy have anything of it's own?\nNote: array.array is not a viable solution, as it uses a lot more memory than NumPy in my testing.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":919,"Q_Id":17099850,"Users Score":3,"Answer":"array.array is a memory efficient array. 
It packs bytes\/words etc together, so there is only a few bytes of extra overhead for the entire array.\nThe one place where numpy can use less memory is when you have a sparse array (and are using one of the sparse array implementations)\nIf you are not using sparse arrays, you simply measured it wrong.\narray.array also doesn't have a packed bool type, so you can implement that as wrapper around an array.array('I') or a bytearray() or even just use bit masks with a Python long","Q_Score":3,"Tags":"python,arrays,numpy,pypy","A_Id":17101084,"CreationDate":"2013-06-14T01:39:00.000","Title":"PyPy and efficient arrays","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been researching RBMs for a couple months, using Python along the way, and have read all your papers. I am having a problem, and I thought, what the hey? Why not go to the source? I thought I would at least take the chance you may have time to reply.\nMy question is regarding the Log-Likelihood in a Restricted Boltzmann Machine. I have read that finding the exact log-likelihood in all but very small models is intractable, hence the introduction of contrastive divergence, PCD, pseudo log-likelihood etc. My question is, how do you find the exact log-likelihood in even a small model? \nI have come across several definitions of this formula, and all seem to be different. In Tielemen\u2019s 2008 paper \u201cTraining Restricted Boltzmann Machines using Approximations To the Likelihood Gradient\u201d, he performs a log-likelihood version of the test to compare to the other types of approximations, but does not say the formula he used. The closest thing I can find is the probabilities using the energy function over the partition function, but I have not been able to code this, as I don\u2019t completely understand the syntax.\nIn Bengio et al \u201cRepresentation Learning: A Review and New Perspectives\u201d, the equation for the log-likelihood is:\n sum_t=1 to T (log P(X^T, theta)) \nwhich is equal to sum_t=1 to T(log * sum_h in {0,1}^d_h(P(x^(t), h; theta))\n where T is training examples. This is (14) on page 11. \nThe only problem is that none of the other variables are defined. I assume x is the training data instance, but what is the superscript (t)? I also assume theta are the latent variables h, W, v\u2026 But how do you translate this into code?\nI guess what I\u2019m asking is can you give me a code (Python, pseudo-code, or any language) algorithm for finding the log-likelihood of a given model so I can understand what the variables stand for? That way, in simple cases, I can find the exact log-likelihood and then compare them to my approximations to see how well my approximations really are.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1642,"Q_Id":17113613,"Users Score":0,"Answer":"Assume you have v visible units, and h hidden units, and v < h. The key idea is that once you've fixed all the values for each visible unit, the hidden units are independent. \nSo you loop through all 2^v subsets of visible unit activations. Then computing the likelihood for the RBM with this particular activated visible subset is tractable, because the hidden units are independent[1]. So then loop through each hidden unit, and add up the probability of it being on and off conditioned on your subset of visible units. 
Then multiply out all of those summed on\/off hidden probabilities to get the probability that particular subset of visible units. Add up all subsets and you are done.\nThe problem is that this is exponential in v. If v > h, just \"transpose\" your RBM, pretending the hidden are visible and vice versa.\n[1] The hidden units can't influence each other, because you influence would have to go through the visible units (no h to h connections), but you've fixed the visible units.","Q_Score":0,"Tags":"python,machine-learning,artificial-intelligence,neural-network","A_Id":17117416,"CreationDate":"2013-06-14T16:55:00.000","Title":"Finding log-likelihood in a restricted boltzmann machine","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a text data labelled into 3 classes and class 1 has 1% data, class 2 - 69% and class 3 - 30%. Total data size is 10000. I am using 10-fold cross validation. For classification, SVM of scikit learn python library is used with class_weight=auto. But the code for 1 step of 10-fold CV has been running for 2 hrs and has not finished. This implies that for code will take at least 20 hours for completion. Without adding the class_weight=auto, it finishes in 10-15min. But then, no data is labelled of class 1 in the output. Is there some way to achieve solve this issue ?","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":2914,"Q_Id":17125247,"Users Score":4,"Answer":"First, for text data you don't need a non linear kernel, so you should use an efficient linear SVM solver such as LinearSVC or PassiveAggressiveClassifier instead.\nThe SMO algorithm of SVC \/ libsvm is not scalable: the complexity is more than quadratic which is practice often makes it useless for dataset larger than 5000 samples.\nAlso to deal with the class imbalance you might want to try to subsample the class 2 and class 3 to have a number of samples maximum twice the number of samples of class 1.","Q_Score":2,"Tags":"python,python-2.7,machine-learning,svm,scikit-learn","A_Id":17133162,"CreationDate":"2013-06-15T15:35:00.000","Title":"SVM Multiclass Classification using Scikit Learn - Code not completing","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I think I have a relatively simply question but am not able to locate an appropriate answer to solve the coding problem.\nI have a pandas column of string:\ndf1['tweet'].head(1)\n0 besides food,\nName: tweet\nI need to extract the text and push it into a Python str object, of this format:\ntest_messages = [\"line1\",\n \"line2\",\n \"etc\"]\nThe goal is to classify a test set of tweets and therefore believe the input to: X_test = tfidf.transform(test_messages) is a str object.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":8156,"Q_Id":17125248,"Users Score":1,"Answer":"Get the Series head(), then access the first value:\ndf1['tweet'].head(1).item()\nor: Use the Series tolist() method, then slice the 0'th element:\ndf.height.tolist()\n[94, 170]\ndf.height.tolist()[0]\n94\n\n(Note that Python indexing is 0-based, but head() is 
1-based)","Q_Score":2,"Tags":"python,string,pandas,series","A_Id":52143806,"CreationDate":"2013-06-15T15:35:00.000","Title":"Get first element of Pandas Series of string","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to export dataframe to .xls file using to_excel() method. But while execution it was throwing an error: \"UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 892: ordinal not in range(128)\". Just few moments back it was working fine. \nThe code I used is: \n :csv2.to_excel(\"C:\\\\Users\\\\shruthi.sundaresan\\\\Desktop\\\\csat1.xls\",sheet_name='SAC_STORE_DATA',index=False).\ncsv2 is the dataframe. Why does this kind of error happens and how to avoid this is in the future?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":474,"Q_Id":17140080,"Users Score":0,"Answer":"The problem you are facing is that your excel has a character that cannot be decoded to unicode. It was probably working before but maybe you edited this xls file somehow in Excel\/Libre. You just need to find this character and either get rid of it or replace it with the one that is acceptable.","Q_Score":0,"Tags":"python-2.7,pandas,xls","A_Id":17141432,"CreationDate":"2013-06-17T03:36:00.000","Title":"Unable to export pandas dataframe into excel file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pyplot polar scatter plot with signed values. Pyplot does the \"right\" thing and creates only a positive axis, then reflects negative values to look as if they are a positive value 180 degrees away.\nBut, by default, pyplot plots all points using the same color. So positive and negative values are indistinguishable.\nI'd like to easily tell positive values at angle x from negative values at angle (x +\/- 180), with positive values red and negative values blue.\nI've made no progress creating what should be a very simple color map for this situation.\nHelp?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":785,"Q_Id":17154006,"Users Score":2,"Answer":"I'm not sure if this is the \"proper\" way to do this, but you could programmatically split your data into two subsets: one containing the positive values and the second containing the negative values. Then you can call the plot function twice, specifying the color you want for each subset.\nIt's not an elegant solution, but a solution nonetheless.","Q_Score":1,"Tags":"python,colors,matplotlib,scatter","A_Id":17154439,"CreationDate":"2013-06-17T18:11:00.000","Title":"Pyplot polar scatter plot color for sign","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If scipy.weave.inline is called inside a massive parallel MPI-enabled application that is run on a cluster with a home-directory that is common to all nodes, every instance accesses the same catalog for compiled code: $HOME\/.pythonxx_compiled. This is bad for obvious reasons and leads to many error messages. 
How can this problem be circumvented?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":282,"Q_Id":17154381,"Users Score":0,"Answer":"One quick workaround is to use a local directory on each node (e.g. \/tmp as Wesley said), but use one MPI task per node, if you have the capacity.","Q_Score":3,"Tags":"python,scipy,cluster-computing,mpi","A_Id":20356186,"CreationDate":"2013-06-17T18:34:00.000","Title":"How can scipy.weave.inline be used in a MPI-enabled application on a cluster?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a sequence of actions taking place on a video like, say \"zooming in and zooming out\" a webpage.\nI want to catch the frames that had a visual change from a some previous frame and so on.\nBasically, want to catch the visual difference happening in the video.\nI have tried using feature detection using SURF. It just detects random frames and does not detect most of the times.\nI have also tried, histograms and it does not help.\nAny directions and pointers?\nThanks in advance,","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":371,"Q_Id":17180409,"Users Score":2,"Answer":"For effects like zooming in and out, optical flow seems the best choice. Search for research papers on \"Shot Detection\" for other possible approaches.\nAs for the techniques you mention, did you apply some form of noise reduction before using them?","Q_Score":2,"Tags":"python,opencv,image-processing,frame,python-imaging-library","A_Id":17181542,"CreationDate":"2013-06-18T23:01:00.000","Title":"change detection on video frames","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i need to find out contact angle between 2 edges in an image using open cv and python so can anybody suggest me how to find it? 
if not code please let me know algorithm for the same.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":531,"Q_Id":17209762,"Users Score":0,"Answer":"Simplify edge A and B into a line equation (using only the few last pixels)\nGet the line equations of the two lines (form y = mx + b)\nGet the angle orientations of the two lines \u03b8=atan|1\/m|\nSubtract the two angles from each other\n\nMake sure to do the special case of infinite slope, and also do some simple math to get Final_Angle = (0;pi)","Q_Score":1,"Tags":"opencv,python-2.7","A_Id":17212232,"CreationDate":"2013-06-20T09:13:00.000","Title":"to measure Contact Angle between 2 edges in an image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I hope this hasn't already been answered but I haven't found this anywhere.\nMy problem is quite simple : I'm trying to compute an oblic profile of an image using scipy.\nOne great way to do it is to : \n\nlocate the segment along which I want my profile by giving the beginning and the end of my segment, \nextract the minimal image containing the segment, \ncompute the angle from which my image should be rotated to get the desired profile along a nice raw, \nextract said raw.\n\nThat's the theory.\nI'm stuck right now on (4), because my profile should fall in raw number array.shape[0]\/2, but rotate seems to add sometimes lines of zeros below the data and columns on the left. The correct raw number can thus be shifted...\nHas anyone any idea how to get the right raw number, for example using the correspondence matrix used by scipy ?\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":131,"Q_Id":17218051,"Users Score":0,"Answer":"After some time of debugging, I realized that depending on the angle - typically under and over n*45 degrees - scipy adds a row and a column to the output image. \na simple test of the angle adding one to the indices solved my problem.\nI hope this can help the future reader of this topic.","Q_Score":1,"Tags":"python,indexing,scipy,rotatetransform,correspondence","A_Id":18653827,"CreationDate":"2013-06-20T15:44:00.000","Title":"finding corresponding pixels before and after scipy.ndimage.interpolate.rotate","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I have a MATLAB function that graphs the trajectory of different divers (the Olympic sport diving) depending on the position of a slider at the bottom of the window. The file takes multiple .mat files (with trajectory information in 3 dimensions) as input. I am trying to put this MATLAB app on to the internet. What would be the easiest\/most efficient way of doing this? I have experience programming in Python and little experience programming in Java.\nHere are the options that I have considered:\n1. MATLAB Builder JA (too expensive)\n2. Rewrite entire MATLAB function into Java (not experienced enough in Java)\n3. Implement MATLAB file using mlabwrapper and using Django to deploy into web app. (having a lot of trouble installing mlabwrapper onto OSX)\n4. 
Rewrite MATLAB function into Python using SciPy, NumPy, and matlibplot and then using Django.\nI do not have any experience with Django but I am willing to learn it. Can someone point me in the right direction?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":934,"Q_Id":17219344,"Users Score":0,"Answer":"You could always just host the MATLAB code and sample .mat on a website for people to download and play with on their own machines if they have a MATLAB license. If you are looking at having some sort of embedded app on your website you are going to need to rewrite your code in another language. The project sounds doable in python using the packages you mentioned however hosting it online will not be as simple as running a program from your command line. Django would help you build a website but I do not think that it will allow you to just run a python script in the browser.","Q_Score":0,"Tags":"python,django,matlab,web-applications,octave","A_Id":17220530,"CreationDate":"2013-06-20T16:51:00.000","Title":"MATLAB to web app","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Hi I have a MATLAB function that graphs the trajectory of different divers (the Olympic sport diving) depending on the position of a slider at the bottom of the window. The file takes multiple .mat files (with trajectory information in 3 dimensions) as input. I am trying to put this MATLAB app on to the internet. What would be the easiest\/most efficient way of doing this? I have experience programming in Python and little experience programming in Java.\nHere are the options that I have considered:\n1. MATLAB Builder JA (too expensive)\n2. Rewrite entire MATLAB function into Java (not experienced enough in Java)\n3. Implement MATLAB file using mlabwrapper and using Django to deploy into web app. (having a lot of trouble installing mlabwrapper onto OSX)\n4. Rewrite MATLAB function into Python using SciPy, NumPy, and matlibplot and then using Django.\nI do not have any experience with Django but I am willing to learn it. Can someone point me in the right direction?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":934,"Q_Id":17219344,"Users Score":1,"Answer":"A cheap and somewhat easy way (with limited functionality) would be:\nInstall MATLAB on your server, or use the MATLAB Compiler to create a stand alone executable (not sure if that comes with your version of MATLAB or not). If you don't have the compiler and can't install MATLAB on your server, you could always go to a freelancing site such as elance.com, and pay someone $20 to compile your code for you into a windows exe file. \nEither way, the end goal is to make your MATLAB function callable from the command line (the server will be doing the calling) You could make your input arguments into the slider value, and the .mat files you want to open, and the compiled version of MATLAB will know how to handle this. Once you do that, have the code create a plot and save an image of it. (using getframe or other figure export tools, check out FEX). Have your server output this image to the client.\nTah-dah, you have a crappy low cost work around! 
\nI hope this helps , if not, I apologize!","Q_Score":0,"Tags":"python,django,matlab,web-applications,octave","A_Id":17224492,"CreationDate":"2013-06-20T16:51:00.000","Title":"MATLAB to web app","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Do you know how to get the index or column of a DataFrame as a NumPy array or python list?","AnswerCount":8,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":558431,"Q_Id":17241004,"Users Score":75,"Answer":"You can use df.index to access the index object and then get the values in a list using df.index.tolist(). Similarly, you can use df['col'].tolist() for Series.","Q_Score":289,"Tags":"python,pandas","A_Id":17241104,"CreationDate":"2013-06-21T17:25:00.000","Title":"How do I convert a pandas Series or index to a Numpy array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"While trying to compute inverse of a matrix in python using numpy.linalg.inv(matrix), I get singular matrix error. Why does it happen? Has it anything to do with the smallness of the values in the matrix. The numbers in my matrix are probabilities and add up to 1.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":7873,"Q_Id":17257056,"Users Score":2,"Answer":"It may very well have to do with the smallness of the values in the matrix.\nSome matrices that are not, in fact, mathematically singular (with a zero determinant) are totally singular from a practical point of view, in that the math library one is using cannot process them properly.\nNumerical analysis is tricky, as you know, and how well it deals with such situations is a measure of the quality of a matrix library.","Q_Score":1,"Tags":"python,numpy,matrix-inverse","A_Id":17257084,"CreationDate":"2013-06-23T01:56:00.000","Title":"Inverse of a Matrix in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"PEP8 has naming conventions for e.g. functions (lowercase), classes (CamelCase) and constants (uppercase).\nIt seems to me that distinguishing between numpy arrays and built-ins such as lists is probably more important as the same operators such as \"+\" actually mean something totally different.\nDoes anyone have any naming conventions to help with this?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":415,"Q_Id":17270293,"Users Score":1,"Answer":"numpy arrays and lists should occupy similar syntactic roles in your code and as such I wouldn't try to distinguish between them by naming conventions. Since everything in python is an object the usual naming conventions are there not to help distinguish type so much as usage. Data, whether represented in a list or a numpy.ndarray has the same usage.\nI agree that it's awkward that eg. + means different things for lists and arrays. I implicitly deal with this by never putting anything like numerical data in a list but rather always in an array. That way I know if I want to concatenate blocks of data I should be using numpy.hstack. 
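As a small illustration of the point about "+" meaning different things, here is a minimal sketch contrasting list concatenation with elementwise array addition and explicit hstack:

```python
import numpy as np

a = [1, 2, 3]
b = [4, 5, 6]

print(a + b)                       # list + concatenates: [1, 2, 3, 4, 5, 6]
print(np.array(a) + np.array(b))   # array + adds elementwise: [5 7 9]

# Concatenating blocks of numerical data is spelled explicitly with hstack:
print(np.hstack([np.array(a), np.array(b)]))   # [1 2 3 4 5 6]
```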
That said, there are definitely cases where I want to build up a list through concatenation and turn it into a numpy array when I'm done. In those cases the code block is usually short enough that it's clear what's going on. Some comments in the code never hurt.","Q_Score":2,"Tags":"python,numpy,naming-conventions","A_Id":17270654,"CreationDate":"2013-06-24T07:39:00.000","Title":"how do you distinguish numpy arrays from Python's built-in objects","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"PEP8 has naming conventions for e.g. functions (lowercase), classes (CamelCase) and constants (uppercase).\nIt seems to me that distinguishing between numpy arrays and built-ins such as lists is probably more important as the same operators such as \"+\" actually mean something totally different.\nDoes anyone have any naming conventions to help with this?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":415,"Q_Id":17270293,"Users Score":2,"Answer":"You may use a prefix np_ for numpy arrays, thus distinguishing them from other variables.","Q_Score":2,"Tags":"python,numpy,naming-conventions","A_Id":17270547,"CreationDate":"2013-06-24T07:39:00.000","Title":"how do you distinguish numpy arrays from Python's built-in objects","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am thinking about a problem I haven't encountered before and I'm trying to determine the most efficient algorithm to use.\nI am iterating over two lists, using each pair of elements to calculate a value that I wish to sort on. My end goal is to obtain the top twenty results. I could store the results in a third list, sort that list by absolute value, and simply slice the top twenty, but that is not ideal. \nSince these lists have the potential to become extremely large, I'd ideally like to only store the top twenty absolute values, evicting old values as a new top value is calculated. \nWhat would be the most efficient way to implement this in python?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":911,"Q_Id":17300419,"Users Score":1,"Answer":"Have a list of size 20 tupples initialised with less than the minimum result of the calculation and two indices of -1. On calculating a result append it to the results list, with the indices of the pair that resulted, sort on the value only and trim the list to length 20. Should be reasonably efficient as you only ever sort a list of length 21.","Q_Score":7,"Tags":"python,algorithm,sorting","A_Id":17300620,"CreationDate":"2013-06-25T14:43:00.000","Title":"Python Sort On The Fly","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a vanilla pandas dataframe with an index. I need to check if the index is sorted. Preferably without sorting it again.\ne.g. 
I can test an index to see if it is unique by index.is_unique() is there a similar way for testing sorted?","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":16454,"Q_Id":17315881,"Users Score":84,"Answer":"How about:\ndf.index.is_monotonic","Q_Score":44,"Tags":"python,pandas","A_Id":17347945,"CreationDate":"2013-06-26T09:07:00.000","Title":"How can I check if a Pandas dataframe's index is sorted","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am wondering whether there is any computational or storage disadvantage to using Panels instead of multi-indexed DataFrames in pandas.\nOr are they the same behind the curtain?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":169,"Q_Id":17353773,"Users Score":0,"Answer":"they have a similiar storage mechanism, and only really differ in the indexing scheme. Performance wise they should be similar. There is more support (code-wise) for multi-level df's as they are more often used. In addition Panels have different silicing semantics, so dtype guarantees are different.","Q_Score":2,"Tags":"python,pandas,data-analysis","A_Id":17370686,"CreationDate":"2013-06-27T21:46:00.000","Title":"Are pandas Panels as efficient as multi-indexed DataFrames?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"From what I've read about Numpy arrays, they're more memory efficient that standard Python lists. What confuses me is that when you create a numpy array, you have to pass in a python list. I assume this python list gets deconstructed, but to me, it seems like it defeats the purpose of having a memory efficient data structure if you have to create a larger inefficient structure to create the efficient one. \nDoes numpy.zeros get around this?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":4676,"Q_Id":17371059,"Users Score":1,"Answer":"Numpy in general is more efficient if you pre-allocate the size. If you know you're going to be populating an MxN matrix...create it first then populate as opposed to using appends for example.\nWhile the list does have to be created, a lot of the improvement in efficiency comes from acting on that structure. Reading\/writing\/computations\/etc.","Q_Score":4,"Tags":"python,numpy","A_Id":17371090,"CreationDate":"2013-06-28T18:08:00.000","Title":"numpy array memory allocation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm basically trying to plot some images based on a given set of parameters of a .fits file. However, this made me curious: what IS a .fits array? When I type in img[2400,3456] or some random values in the array, I get some output. \nI guess my question is more conceptual than code-based, but, it boils down to this: what IS a .fits file, and what do the arrays and the outputs represent?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":119,"Q_Id":17376904,"Users Score":1,"Answer":"A FITS file consists of header-data units. 
A header-data unit contains an ASCII-type header with\nkeyword-value-comment triples plus either binary FITS tables or (hyperdimensional) image cubes.\nEach entry in a binary FITS table may itself contain hyperdimensional image cubes. An array\nis some slice through some dimensions of any of these cubes.\nNow as a shortcut to images stored in the first (a.k.a. primary) header-data unit, many viewers\nallow one to indicate in square brackets some indices of windows into these images (which in most\ncommon cases is based on the equivalent support by the cfitsio library).","Q_Score":0,"Tags":"python,arrays,astronomy,fits","A_Id":19917484,"CreationDate":"2013-06-29T05:07:00.000","Title":"What IS a .fits file, as in, what is a .fits array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"First of all, I am aware that matrix and array are two different data types in NumPy. But I put both in the title to make it a general question. If you are editing this question, please feel free to remove one. Ok, here is my question,\nHere is an edit to the original question. Consider a Markov Chain with a 2 dimensional state vector x_t=(y_t,z_t) where y_t and z_t are both scalars. What is the best way of representing\/storing\/manipulating the transition matrix of this Markov Chain?\nNow, what I explained is a simplified version of my problem. My Markov Chain state vector is a 5*1 vector. \nHope this clarifies","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":846,"Q_Id":17416448,"Users Score":2,"Answer":"Let's say you're trying to use a Markov chain to model English sentence syntax. Your transition matrix will give you the probability of going from one part of speech to another part of speech. Now let's suppose that we're using a 3rd-order Markov model. This would give us the probability of going from state 123 to 23X, where X is a valid state. \nThe Markov transition matrix would be N^3 x N, which is still a 2-dimensional matrix regardless of the dimensionality of the states themselves. If you're generating the probability distributions based on empirical evidence, then, in this case, there are going to be states with probability 0. \nIf you're worried about sparsity, perhaps arrays are not the best choice. Instead of using an array of arrays, perhaps you should use a dictionary of dictionaries. Or if you have many transition matrices, an array of dictionaries of dictionaries.\nEDIT (based off comment):\nYou're right, that is more complicated. Nonetheless, for any state, (i,j), there exists a probability distribution for going to the next state, (m,n). Hence, we have our \"outer\" dictionary, whose keys are all the possible states. 
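A minimal sketch of that dictionary-of-dictionaries representation (the states and probabilities are made up purely for illustration):

```python
# Sparse transition "matrix" as a dict of dicts: outer key = current state,
# inner dict = probability distribution over next states.  Missing entries
# are implicitly probability 0, so impossible moves are never stored.
transitions = {
    (0, 0): {(0, 1): 0.7, (1, 0): 0.3},
    (0, 1): {(1, 1): 1.0},
    (1, 0): {(0, 0): 0.5, (1, 1): 0.5},
}

def transition_prob(current, nxt):
    """Probability of moving from `current` to `nxt`; 0.0 if never observed."""
    return transitions.get(current, {}).get(nxt, 0.0)

print(transition_prob((0, 0), (1, 0)))  # 0.3
print(transition_prob((0, 0), (1, 1)))  # 0.0 (not stored)
```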
Each key (state) points to a value that is a dictionary, which holds the probability distribution for that state.","Q_Score":2,"Tags":"python,numpy,linear-algebra,multidimensional-array","A_Id":17416531,"CreationDate":"2013-07-02T02:21:00.000","Title":"Multiplication of Multidimensional matrices (arrays) in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a traffic study and I have the following problem:\nI have a CSV file that contains time-stamps and license plate numbers of cars for a location and another CSV file that contains the same thing. I am trying to find matching license plates between the two files and then find the time difference between the two. I know how to match strings but is there a way I can find matches that are close maybe to detect user input error of the license plate number?\nEssentially the data looks like the following:\n\nA = [['09:02:56','ASD456'],...]\nB = [...,['09:03:45','ASD456'],...]\n\nAnd I want to find the time difference between the two sightings but say if the data was entered slightly incorrect and the license plate for B says 'ASF456' that it will catch that","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":1137,"Q_Id":17456233,"Users Score":1,"Answer":"What you're asking is about a fuzzy search, from what it sounds like. Instead of checking string equality, you can check if the two string being compared have a levenshtein distance of 1 or less. Levenshtein distance is basically a fancy way of saying how many insertions, deletions or changes will it take to get from word A to B. This should account for small typos.\nHope this is what you were looking for.","Q_Score":0,"Tags":"python,comparison","A_Id":17456347,"CreationDate":"2013-07-03T19:09:00.000","Title":"Python Matching License Plates","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pandas dataFrame created through a mysql call which returns the data as object type.\nThe data is mostly numeric, with some 'na' values.\nHow can I cast the type of the dataFrame so the numeric values are appropriately typed (floats) and the 'na' values are represented as numpy NaN values?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":4908,"Q_Id":17457418,"Users Score":1,"Answer":"df = df.convert_objects(convert_numeric=True) will work in most cases.\nI should note that this copies the data. It would be preferable to get it to a numeric type on the initial read. 
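On newer pandas versions, where convert_objects is deprecated, a per-column coercion with pd.to_numeric achieves the same thing; this is a sketch rather than the original answer's code:

```python
import pandas as pd

# Object-typed frame as it might come back from a MySQL query.
df = pd.DataFrame({"a": ["1.5", "2.0", "na"], "b": ["10", "na", "30"]}, dtype=object)

# Coerce each column: real numbers become floats, 'na' strings become NaN.
df = df.apply(lambda col: pd.to_numeric(col, errors="coerce"))

print(df.dtypes)        # both columns are now float64
print(df.isna().sum())  # the former 'na' strings are now NaN
```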
If you post your code and a small example, someone might be able to help you with that.","Q_Score":4,"Tags":"python,numpy,pandas","A_Id":17457967,"CreationDate":"2013-07-03T20:21:00.000","Title":"Converting Pandas Dataframe types","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Libsvm in a 5x2 cross validation to classify a very huge amount of data, that is, I have 47k samples for training and 47k samples for testing in 10 different configurations.\nI usually use the Libsvm's script easy.py to classify the data, but it's taking so long, I've been waiting for results for more than 3 hours and nothing, and I still have to repeat this procedure 9 more times!\nDoes anybody know how to use libsvm faster with a very huge amount of data? Do the C++ libsvm functions work faster than the Python functions?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3432,"Q_Id":17457460,"Users Score":0,"Answer":"easy.py is a script for training and evaluating a classifier. It does a meta-training of the SVM parameters with grid.py. In grid.py there is a parameter \"nr_local_worker\" which defines the number of threads. You might wish to increase it (check processor load).","Q_Score":3,"Tags":"python,c++,svm,libsvm","A_Id":18509671,"CreationDate":"2013-07-03T20:24:00.000","Title":"Large training and testing data in libsvm","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an empirical network with 585 nodes and 5,441 edges. This is a scale-free network with max node degree of 179 and min node degree of 1. I am trying to create an equivalent random graph (using random_degree_sequence_graph from networkx), but my Python just keeps running. I did a similar exercise for the network with 100 nodes - it took just a second to create a random graph. But with 585 nodes it takes forever. The result of the is_valid_degree_sequence command is True. Is it possible that Python goes into some infinite loop with my degree sequence, or does it actually take a very long time (more than half an hour) to create a graph of such size? Please let me know if anyone has had any experience with this. I am using Python 2.7.4.\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":532,"Q_Id":17464274,"Users Score":1,"Answer":"That algorithm's run time could get very long for some degree sequences. And it is not guaranteed to produce a graph. Depending on your end use, you might consider using the configuration_model(). 
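A minimal sketch of that alternative (the degree sequence below is randomly generated purely for illustration):

```python
import random
import networkx as nx

# A made-up degree sequence standing in for the empirical one; the sum of the
# degrees has to be even for a graph with that sequence to exist.
degree_sequence = [random.randint(1, 10) for _ in range(585)]
if sum(degree_sequence) % 2:
    degree_sequence[0] += 1

# configuration_model runs in roughly linear time, so it finishes even where
# random_degree_sequence_graph stalls.  Note that it returns a MultiGraph.
G = nx.configuration_model(degree_sequence)
print(G.number_of_nodes(), G.number_of_edges())
```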
Though it doesn't sample graphs uniformly at random and might produce parallel edges and self loops it will always finish.","Q_Score":0,"Tags":"python,networkx","A_Id":17471244,"CreationDate":"2013-07-04T07:28:00.000","Title":"An issue with generating a random graph with given degree sequence: time consuming or some error?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I accidentally installed python 2.7 again on my mac (mountain lion), when trying to install scipy using macports:\n\nsudo port install py27-scipy\n---> Computing dependencies for py27-scipy\n---> Dependencies to be installed: SuiteSparse gcc47 cctools cctools-headers llvm-3.3 libffi llvm_select cloog gmp isl gcc_select\n ld64 libiconv libmpc mpfr libstdcxx ppl glpk zlib py27-nose\n nosetests_select py27-setuptools python27 bzip2 db46 db_select\n gettext expat ncurses libedit openssl python_select sqlite3 py27-numpy\n fftw-3 swig-python swig pcre\n\nI am still using my original install of python (and matplotlib and numpy etc), without scipy. How do I remove this new version? It is taking up ~2Gb space.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":609,"Q_Id":17486322,"Users Score":0,"Answer":"How about sudo port uninstall python27?","Q_Score":0,"Tags":"python,scipy,reinstall","A_Id":17488424,"CreationDate":"2013-07-05T10:10:00.000","Title":"Installing python (same version) on accident twice","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Can't get the titles right in matplotlib: \n'technologie\u00ebn in \u00b0C' gives: technologie\u00c3n in \u00c3C\nPossible solutions already tried:\n\nu'technologie\u00ebn in \u00b0C' doesn't work\nneither does: # -*- coding: utf-8 -*- at the beginning of the code-file.\n\nAny solutions?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6904,"Q_Id":17525882,"Users Score":0,"Answer":"In Python3, there is no need to worry about all that troublesome UTF-8 problems.\nOne note that you will need to set a Unicode font before plotting.\nmatplotlib.rc('font', family='Arial')","Q_Score":5,"Tags":"python,matplotlib,unicode,python-2.x","A_Id":42853468,"CreationDate":"2013-07-08T11:50:00.000","Title":"How to pass Unicode title to matplotlib?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"TFIDFVectorizer takes so much memory ,vectorizing 470 MB of 100k documents takes over 6 GB , if we go 21 million documents it will not fit 60 GB of RAM we have.\nSo we go for HashingVectorizer but still need to know how to distribute the hashing vectorizer.Fit and partial fit does nothing so how to work with Huge Corpus?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":4523,"Q_Id":17536394,"Users Score":1,"Answer":"One way to overcome the inability of HashingVectorizer to account for IDF is to index your data into elasticsearch or lucene and retrieve termvectors from there using which you can calculate 
Tf-IDF.","Q_Score":3,"Tags":"python,numpy,machine-learning,scipy,scikit-learn","A_Id":28424354,"CreationDate":"2013-07-08T21:36:00.000","Title":"How can i reduce memory usage of Scikit-Learn Vectorizers?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm proficient in Python but a noob at Scala. I'm about to write some dirty experiment code in Scala, and came across the thought that it would be really handy if Scala had a function like help() in Python. For example, if I wanted to see the built-in methods for a Scala Array I might want to type something like help(Array), just like I would type help(list) in Python. Does such a thing exist for Scala?","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":2972,"Q_Id":17536758,"Users Score":1,"Answer":"Similarly, IDEA has its \"Quick Documentation Look-up\" command, which works for Scala as well as Java (-Doc) JARs and source-code documentation comments.","Q_Score":11,"Tags":"python,scala,equivalent","A_Id":17538468,"CreationDate":"2013-07-08T22:03:00.000","Title":"Scala equivalent of Python help()","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm proficient in Python but a noob at Scala. I'm about to write some dirty experiment code in Scala, and came across the thought that it would be really handy if Scala had a function like help() in Python. For example, if I wanted to see the built-in methods for a Scala Array I might want to type something like help(Array), just like I would type help(list) in Python. Does such a thing exist for Scala?","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2972,"Q_Id":17536758,"Users Score":0,"Answer":"In scala , you can try using the below ..( similar to the one we have in python )..\nhelp(RDD1) in python will give you the rdd1 description with full details.\nScala > RDD1.[tab] \nOn hitting tab you will find the list of options available to the specified RDD1, similar option you find in eclipse .","Q_Score":11,"Tags":"python,scala,equivalent","A_Id":55942086,"CreationDate":"2013-07-08T22:03:00.000","Title":"Scala equivalent of Python help()","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a working conjungate gradient method implementation in pycuda, that I want to optimize. It uses a self written matrix-vector-multiplication and the pycuda-native gpuarray.dot and gpuarray.mul_add functions\nProfiling the program with kernprof.py\/line_profiler returned most time (>60%) till convergence spend in one gpuarray.dot() call. (About .2 seconds)\nAll following calls of gpuarray.dot() take about 7 microseconds. All calls have the same type of input vectors (size: 400 doubles)\nIs there any reason why? I mean in the end it's just a constant, but it is making the profiling difficult.\nI wanted to ask the question at the pycuda mailing list. However I wasn't able to subscribe with an @gmail.com adress. 
If anyone has either an explanation for the strange .dot() behavior or my inability to subscribe to that mailing list please give me a hint ;)","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":748,"Q_Id":17574547,"Users Score":2,"Answer":"One reason would be that Pycuda is compiling the kernel before uploading it. As far as I remember thought that should happen only the very first time it executes it.\nOne solution could be to \"warm up\" the kernel by executing it once and then start the profiling procedure.","Q_Score":1,"Tags":"python,cuda,pycuda,mailing-list","A_Id":17581063,"CreationDate":"2013-07-10T15:19:00.000","Title":"pycuda.gpuarray.dot() very slow at first call","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need some help from python programmers to solve the issue I'm facing in processing data:-\n\nI have .csv files placed in a directory structure like this:-\n-MainDirectory\n\nSub directory 1\n\nsub directory 1A\n\nfil.csv\n\n\nSub directory 2\n\nsub directory 2A\n\nfile.csv\n\n\nsub directory 3\n\nsub directory 3A\n\nfile.csv\n\n\n\nInstead of going into each directory and accessing the .csv files, I want to run a script that can combine the data of the all the sub directories. \n\nEach file has the same type of header. And I need to maintain 1 big .csv file with one header only and all the .csv file data can be appended one after the other. \nI have the python script that can combine all the files in a single file but only when those files are placed in one folder. \nCan you help to provide a script that can handle the above directory structure?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1781,"Q_Id":17586573,"Users Score":0,"Answer":"you can use os.listdir() to get list of files in directory","Q_Score":3,"Tags":"python,csv,python-2.7","A_Id":17587872,"CreationDate":"2013-07-11T06:34:00.000","Title":"Python - Combing data from different .csv files. into one","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"The case is, I have a 2D array and can convert it to a plot. How can I read the y value of a point with given x?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":17658836,"Users Score":0,"Answer":"Just access the input data that you used to generate the plot. Either this is a mathematical function which you can just evaluate for a given x or this is a two-dimensional data set which you can search for any given x. 
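For the data-set case, a minimal sketch of reading off y at an arbitrary x from the arrays that were passed to plot():

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 10.0, 50)      # the data that was handed to plot()
y = np.sin(x)
plt.plot(x, y)

# Read the y value at an arbitrary x by interpolating the stored arrays;
# np.interp expects x to be sorted in increasing order.
x_query = 3.3
y_query = np.interp(x_query, x, y)
print(y_query)
```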
In the latter case, if x is not contained in the data set, you might want to interpolate or throw an error.","Q_Score":0,"Tags":"python,matplotlib","A_Id":17659174,"CreationDate":"2013-07-15T16:12:00.000","Title":"How to read out point position of a given plot in matplotlib?(without using mouse)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Have a 2-dimensional array, like -\n\na[0] = [ 0 , 4 , 9 ]\na[1] = [ 2 , 6 , 11 ]\na[2] = [ 3 , 8 , 13 ]\na[3] = [ 7 , 12 ]\nNeed to select one element from each of the sub-array in a way that the resultant set of numbers are closest, that is the difference between the highest number and lowest number in the set is minimum.\nThe answer to the above will be = [ 9 , 6 , 8 , 7 ].\nHave made an algorithm, but don't feel its a good one.\nWhat would be a efficient algorithm to do this in terms of time and space complexity?\nEDIT - My Algorithm (in python)-\n\nINPUT - Dictionary : table{}\nOUTPUT - Dictionary : low_table{}\n#\nN = len(table)\nfor word_key in table:\n for init in table[word_key]:\n temp_table = copy.copy(table)\n del temp_table[word_key]\n per_init = copy.copy(init)\n low_table[init]=[]\n for ite in range(N-1):\n min_val = 9999\n for i in temp_table:\n for nums in temp_table[i]:\n if min_val > abs(init-nums):\n min_val = abs(init-nums)\n del_num = i\n next_num = nums\n low_table[per_init].append(next_num)\n init = (init+next_num)\/2\n del temp_table[del_num]\nlowest_val = 99\nlowest_set = []\nfor x in low_table:\n low_table[x].append(x)\n low_table[x].sort()\n mini = low_table[x][-1]-low_table[x][0]\n if mini < lowest_val:\n lowest_val = mini\n lowest_set = low_table[x]\nprint lowest_set","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":667,"Q_Id":17667022,"Users Score":11,"Answer":"collect all the values to create a single ordered sequence, with each element tagged with the array it came from:\n0(0), 2(1), 3(2), 4(0), 6(1), ... 12(3), 13(2)\nthen create a window across them, starting with the first (0(0)) and ending it at the first position that makes the window span all the arrays (0(0) -> 7(3))\nthen roll this window by incrementing the start of the window by one, and increment the end of the window until you again have a window that covers all elements.\nthen roll it again: (2(1), 3(2), 4(0), ... 7(3)), and so forth.\nat each step keep track of the the difference between the largest and the smallest. Eventually you find the one with the smallest window. I have the feeling that in the worst case this is O(n^2) but that's just a guess.","Q_Score":14,"Tags":"python,algorithm","A_Id":17667109,"CreationDate":"2013-07-16T02:26:00.000","Title":"Finding N closest numbers","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm having a little bug in my program where you give the computer a random number to try to guess, and a range to guess between and the amount of guesses it has. After the computer generates a random number, it asks you if it is your number, if not, it asks you if it is higher or lower than it. My problem is, if your number is 50, and it generates 53, you would say \"l\" or \"lower\" or something that starts with \"l\". 
Then it would generate 12 or something, you would say \"higher\" or something, and it might give you 72. How could I make it so that it remembers to be lower than 53?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":53,"Q_Id":17716737,"Users Score":3,"Answer":"Create 2 variables that contain the lowest and highest possible values. Whenever you get a response, store it in the appropriate variable. Make the RNG pick a value between the two.","Q_Score":1,"Tags":"python,random","A_Id":17716804,"CreationDate":"2013-07-18T07:04:00.000","Title":"Random number indexing past inputs Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have the following problem:\nThere are 12 samples around 20000 elements each from unknown distributions (sometimes the distributions are not uni-modal so it's hard to automatically estimate an analytical family of the distributions).\nBased on these distributions I compute different quantities. How can I explore the distribution of the target quantity in the most efficient (and simplest) way?\nTo be absolutely clear, here's a simple example: quantity A is equal to B*C\/D\nB,C,D are distributed according to unknown laws but I have samples from their distributions and based on these samples I want to compute the distribution of A.\nSo in fact what I want is a tool to explore the distribution of the target quantity based on samples of the variables.\nI know that there are MCMC algorithms to do that. But does anybody know a good implementation of an MCMC sampler in Python or C? Or are there any other ways to solve the problem?\nMaxim","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":838,"Q_Id":17740281,"Users Score":0,"Answer":"The simplest way to explore the distribution of A is to generate samples based on the samples of B, C, and D, using your rule. That is, for each iteration, draw one value of B, C, and D from their respective sample sets, independently, with repetition, and calculate A = B*C\/D.\nIf the sample sets for B, C, and D have the same size, I recommend generating a sample for A of the same size. Much fewer samples would result in loss of information, much more samples would not gain much. And yes, even though many samples will not be drawn, I still recommend drawing with repetition.","Q_Score":1,"Tags":"python,c,statistics,mcmc","A_Id":19030208,"CreationDate":"2013-07-19T07:21:00.000","Title":"MCMC implementation in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a program for learning Artificial Neural Network and it takes a 2-d numpy array as training data. The size of the data array I want to use is around 300,000 x 400 floats. I can't use chunking here because the library I am using (DeepLearningTutorials) takes a single numpy array as training data.\nThe code shows MemoryError when the RAM usage is around 1.6Gb by this process(I checked it in system monitor) but I have a total RAM of 8GB. 
Also, the system is Ubuntu-12.04 32-bit.\nI checked the answers for other similar questions, but somewhere it says that there is nothing like allocating memory to your Python program, and elsewhere the answer is not clear as to how to increase the process memory.\nOne interesting thing is I am running the same code on a different machine and it can take a numpy array of almost 1,500,000 x 400 floats without any problem. The basic configurations are similar except that the other machine is 64-bit and this one is 32-bit.\nCould someone please give some theoretical answer as to why there is so much difference in this, or is this the only reason for my problem?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":452,"Q_Id":17756791,"Users Score":2,"Answer":"A 32-bit OS can only address up to around 4 GB of RAM, while a 64-bit OS can take advantage of a lot more RAM (theoretically 16.8 million terabytes). Since your OS is 32-bit, your OS can only take advantage of 4 GB, so your other 4 GB isn't used.\nThe other 64-bit machine doesn't have the 4 GB RAM limit, so it can take advantage of all of its installed RAM.\nThese limits come from the fact that a 32-bit machine can only store memory addresses (pointers) of 32 bits, so there are 2^32 different possible memory locations that the computer can identify. Similarly, a 64-bit machine can identify 2^64 different possible memory locations, so it can address 2^64 different bytes.","Q_Score":2,"Tags":"python,numpy,ubuntu-12.04,32-bit","A_Id":17756813,"CreationDate":"2013-07-19T23:01:00.000","Title":"Python Process using only 1.6 GB RAM Ubuntu 32 bit in Numpy Array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have a dataframe with a monthly timestep as index. I know I can use dataframe.groupby(lambda x:x.year) to group monthly data into yearly data and apply other operations. Is there some way I could quickly group them, let's say by decade? \nthanks for any hints.","AnswerCount":4,"Available Count":1,"Score":0.2449186624,"is_accepted":false,"ViewCount":31949,"Q_Id":17764619,"Users Score":5,"Answer":"If your DataFrame has headers, say ['Population','Salary','vehicle count'], make the year your index: dataframe=dataframe.set_index('Year')\nThen use the code below to resample the data into decades of 10 years; it also gives you the sum of all other columns within that decade:\ndataframe=dataframe.resample('10AS').sum()","Q_Score":15,"Tags":"python,pandas","A_Id":54003707,"CreationDate":"2013-07-20T17:12:00.000","Title":"pandas dataframe group year index by decade","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Are there implementations available for any co-clustering algorithms in Python? The scikit-learn package has k-means and hierarchical clustering but seems to be missing this class of clustering.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":720,"Q_Id":17767807,"Users Score":0,"Answer":"The fastest clustering algorithm I know of does this:\nRepeat O(log N) times:\nC = M x X\nWhere X is N x dim and M is clus x N...\nIf your clusters are not \"flat\"...\nPerform f(X) = ... 
This just projects X onto some \"flat\" space...","Q_Score":3,"Tags":"python,machine-learning,scipy,scikit-learn,unsupervised-learning","A_Id":17768482,"CreationDate":"2013-07-20T23:55:00.000","Title":"Co-clustering algorithm in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have data which is of the gaussian form when plotted as histogram. I want to plot a gaussian curve on top of the histogram to see how good the data is. I am using pyplot from matplotlib. Also I do NOT want to normalize the histogram. I can do the normed fit, but I am looking for an Un-normalized fit. Does anyone here know how to do it?\nThanks!\nAbhinav Kumar","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":12313,"Q_Id":17779316,"Users Score":3,"Answer":"Another way of doing this is to find the normalized fit and multiply the normal distribution with (bin_width*total length of data)\nthis will un-normalize your normal distribution","Q_Score":6,"Tags":"python,matplotlib,histogram,gaussian","A_Id":20057520,"CreationDate":"2013-07-22T03:04:00.000","Title":"Un-normalized Gaussian curve on histogram","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the ODRPACK library in Python to fit some 1d data. It works quite well, but I have one question: is there any possibility to make constraints on the fitting parameters? For example if I have a model y = a * x + b and for physical reasons parameter a can by only in range (-1, 1). I've found that such constraints can be done in original Fortran implementation of the ODRPACK95 library, but I can't find how to do that in Python.\nOf course, I can implement my functions such that they will return very big values, if the fitting parameters are out of bounds and chi squared will be big too, but I wonder if there is a right way to do that.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":780,"Q_Id":17783481,"Users Score":3,"Answer":"I'm afraid that the older FORTRAN-77 version of ODRPACK wrapped by scipy.odr does not incorporate constraints. ODRPACK95 is a later extension of the original ODRPACK library that predates the scipy.odr wrappers, and it is unclear that we could legally include it in scipy. There is no explicit licensing information for ODRPACK95, only the general ACM TOMS non-commercial license.","Q_Score":3,"Tags":"python,scipy,curve-fitting","A_Id":17786438,"CreationDate":"2013-07-22T09:01:00.000","Title":"Constraints on fitting parameters with Python and ODRPACK","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Sorry about the title, SO wasn't allowing the word \"problem\" in it. I have the following problem:\nI have packages of things I want to sell, and each package has a price. 
When someone requests things X, Y and Z, I want to look through all the packages, some of which contain more than one item, and give the user the combination of packages that will cover their things and have the minimal price.\nFor example, I might suggest [(X, Y), (Z, Q)] for $10, since [(X), (Y), (Z)] costs $11. Since these are prices, I can't use the greedy weighted set cover algorithm, because two people getting the same thing for different prices would be bad.\nHowever, I haven't been able to find a paper (or anything) detailing an optimal algorithm for the weighted set cover problem. Can someone help with an implementation (I'm using Python), a paper, or even a high-level description of how it works?\nThe packages I have are in the hundreds, and the things are 5-6, so running time isn't really an issue.\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1472,"Q_Id":17813029,"Users Score":2,"Answer":"If you want an exponential algorithm, just try every subset of the set of packages and take the cheapest one that contains all the things you need.","Q_Score":0,"Tags":"python,algorithm,set,cover","A_Id":17813855,"CreationDate":"2013-07-23T14:25:00.000","Title":"An optimal algorithm for the weighted set cover issue?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a set of points for which I want to construct KD Tree. After some time I want to add few more points to this KDTree periodically. Is there any way to do this in scipy implementation","AnswerCount":1,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":6831,"Q_Id":17817889,"Users Score":20,"Answer":"The problem with k-d-trees is that they are not designed for updates.\nWhile you can somewhat easily insert objects (if you use a pointer based representation, which needs substantially more memory than an array-based tree), and do deletions with tricks such as tombstone messages, doing such changes will degrate the performance of the tree.\nI am not aware of a good method for incrementally rebalancing a k-d-tree. For 1-dimensional trees you have red-black-trees, B-trees, B*-trees, B+-trees and such things. These don't obviously work with k-d-trees because of the rotating axes and thus different sorting. So in the end, with a k-d-tree, it may be best to just collect changes, and from time to time do a full tree rebuild. Then at least this part of the tree will be quite good.\nHowever, there exists a similar structure (that in my experiments often outperforms the k-d-tree!): the R*-tree. Instead of performing binary splits, it uses rectangular bounding boxes to collect objects, and a lot of thought was put into making the tree a dynamic data structure. 
This is also where the R*-tree performs much better than the R-tree: it has a much more clever split for kNN search, and it performs incremental rebalancing to improve its structure.","Q_Score":22,"Tags":"python,scipy,kdtree","A_Id":17822258,"CreationDate":"2013-07-23T18:13:00.000","Title":"Is there any way to add points to KD tree implementation in Scipy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am loading a text file into pandas, and have a field that contains year. I want to make sure that this field is a string when pulled into the dataframe. \nI can only seem to get this to work if I specify the exact length of the string using the code below:\ndf = pd.read_table('myfile.tsv', dtype={'year':'S4'})\nIs there a way to do this without specifying length? I will need to perform this action on different columns that vary in length.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4022,"Q_Id":17822595,"Users Score":1,"Answer":"I believe we enabled in 0.12\nyou can pass str,np.str_,object in place of an S4\nwhich all convert to object dtype in any event\nor after you read it in\ndf['year'].astype(object)","Q_Score":0,"Tags":"python,pandas","A_Id":17822815,"CreationDate":"2013-07-23T23:17:00.000","Title":"Specify DataType using read_table() in Pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In python and igraph I have many nodes with high degree. I always need to consider the edges from a node in order of their weight. It is slow to sort the edges each time I visit the same node. Is there some way to persuade igraph to always give the edges from a node in weight sorted order, perhaps by some preprocessing?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":17844688,"Users Score":0,"Answer":"As far as I understand, you wont have access to the C backend from Python. What about storing the sorted edge in an attribute of the vertices eg in g.vs[\"sortedOutEdges\"] ?","Q_Score":0,"Tags":"python,igraph","A_Id":17845330,"CreationDate":"2013-07-24T20:58:00.000","Title":"Fast processing","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to compare two histograms by having the Y axis show the percentage of each column from the overall dataset size instead of an absolute value. Is that possible? I am using Pandas and matplotlib.\nThanks","AnswerCount":6,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":86215,"Q_Id":17874063,"Users Score":19,"Answer":"I know this answer is 6 years later but to anyone using density=True (the substitute for the normed=True), this is not doing what you might want to. It will normalize the whole distribution so that the area of the bins is 1. So if you have more bins with a width < 1 you can expect the height to be > 1 (y-axis). 
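A quick sketch demonstrating that behaviour (the data here is made up purely for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.normal(loc=0.0, scale=0.1, size=1000)   # narrow distribution

# density=True scales the bars so their total *area* is 1, not their heights.
counts, edges, _ = plt.hist(data, bins=50, density=True)

print(counts.max())                      # typically well above 1 for narrow bins
print(np.sum(counts * np.diff(edges)))   # the total area: 1.0
```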
If you want to bound your histogram to [0;1] you will have to calculate it yourself.","Q_Score":78,"Tags":"python,pandas,matplotlib","A_Id":58946534,"CreationDate":"2013-07-26T06:04:00.000","Title":"Is there a parameter in matplotlib\/pandas to have the Y axis of a histogram as percentage?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a non-technical client who has some hierarchical product data that I'll be loading into a tree structure with Python. The tree has a variable number of levels, and a variable number nodes and leaf nodes at each level.\nThe client already knows the hierarchy of products and would like to put everything into an Excel spreadsheet for me to parse.\nWhat format can we use that allows the client to easily input and maintain data, and that I can easily parse into a tree with Python's CSV? Going with a column for each level isn't without its hiccups (especially if we introduce multiple node types)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":13410,"Q_Id":17900112,"Users Score":0,"Answer":"If spreadsheet is a must in this solution, hierarchy can be represented by indents on the Excel side (empty cells at the beginnings of rows), one row per node\/leaf. On the Python side, one can parse them to tree structure (of course, one needs to filter out empty rows and some other exceptions). Node type can be specified on it's own column. For example, it could even be the first non-empty cell.\nI guess, hierarchy level is limited (say, max 8 levels), otherwise Excel is not good idea at all.\nAlso, there is a library called openpyxl, which can help reading Excel files directly, without user needing to convert them to CSV (it adds usability to the overall approach).\nAnother approach is to put a level number in the first cell. The number should never be incremented by 2 or more.\nYet another approach is to use some IDs for each node and each node leaf would need to specify parent's id. But this is not very user-friendly.","Q_Score":14,"Tags":"python,excel,csv,tree,hierarchy","A_Id":17900531,"CreationDate":"2013-07-27T16:38:00.000","Title":"Represent a tree hierarchy using an Excel spreadsheet to be easily parsed by Python CSV reader?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried installing pandas using easy_install and it claimed that it successfully installed the pandas package in my Python Directory.\nI switch to IDLE and try import pandas and it throws me the following error - \n\nTraceback (most recent call last):\n File \"\", line 1, in \n import pandas\n File \"C:\\Python27\\lib\\site-packages\\pandas-0.12.0-py2.7-win32.egg\\pandas\\__init__.py\", line 6, in \n from . 
import hashtable, tslib, lib\n File \"numpy.pxd\", line 157, in init pandas.hashtable (pandas\\hashtable.c:20282)\nValueError: numpy.dtype has the wrong size, try recompiling\n\nPlease help me diagnose the error.\nFYI: I have already installed the numpy package","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26604,"Q_Id":17904600,"Users Score":0,"Answer":"Pandas does not work with Python 2.7; you will need Python 3.6 or higher.","Q_Score":3,"Tags":"python-2.7,pandas,easy-install","A_Id":61605458,"CreationDate":"2013-07-28T03:06:00.000","Title":"Pandas import error","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I divide a list into smaller, not evenly sized intervals, given the ideal initial and final values of each interval?\nI have a list of 16383 items. I also have a separate list of the values at which each interval should end and the next should begin.\nI would need to use the given intervals to assign each element to the partition it belongs to, depending on its value.\nI have tried reading stuff, but I only encountered the case where, given the original list, people split it into evenly sized partitions...\nThanks\nBlaise","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":141,"Q_Id":17925460,"Users Score":0,"Answer":"For each range in your limits list, create an empty list (plus one for the overflow values) as a tuple with the max value and the min value for that list; the last one will have a max of None.\nFor each value in the values list, run through your tuples until you find the one where your value is > min and < max, or the max is None. \nWhen you find the right list, append the value to it and go on to the next.","Q_Score":0,"Tags":"python,arrays,slice","A_Id":17938780,"CreationDate":"2013-07-29T13:32:00.000","Title":"Dividing an array into partitions NOT evenly sized, given the points where each partition should start or end, in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I currently have a function PushLogUtility(p,w,f) that I am looking to optimise w.r.t. f (a 2xk list) for fixed p (a 9xk list) and w (a 2xk list).\nI am using the scipy.optimize.fmin function but am getting errors, I believe because f is 2-dimensional. I had written a previous function LogUtility(p,q,f) passing a 1-dimensional input and it worked.\nOne option it seems is to write the p, w and f into 1-dimensional lists but this would be time-consuming and less readable. Is there any way to make fmin optimise a function with a 2D input?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":380,"Q_Id":17950492,"Users Score":0,"Answer":"It seems it is in fact impossible to pass a 2D list to scipy.optimize.fmin. However, flattening the input f was not that much of a problem, and while it makes the code slightly uglier, the optimisation now works.\nInterestingly, I also coded the optimisation in Matlab, which does take 2D inputs to its fminsearch function. 
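The flattening approach can be wrapped so that the objective still sees a 2xk array; below is a minimal sketch with a made-up stand-in objective, since PushLogUtility itself is not shown in the question:

```python
import numpy as np
from scipy.optimize import fmin

k = 5
p = np.random.rand(9, k)                  # fixed parameters (placeholders)
w = np.random.rand(2, k)

def push_log_utility(p, w, f):            # hypothetical stand-in for the real objective
    return np.sum((f - w) ** 2)

def objective_flat(f_flat):
    f = f_flat.reshape(2, k)              # fmin hands us a 1-D vector; reshape it
    return push_log_utility(p, w, f)

f0 = np.zeros((2, k))
f_opt = fmin(objective_flat, f0.ravel(), disp=False).reshape(2, k)
print(f_opt.shape)                        # (2, k) again
```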
Both programs give the same output (y).","Q_Score":0,"Tags":"python,optimization,scipy","A_Id":17951581,"CreationDate":"2013-07-30T14:57:00.000","Title":"Passing 2D argument into numpy.optimize.fmin error","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have Python 3.3 and 2.7 installed on my computer\nFor Python 3.3, I installed many libraries like numpy, scipy, etc\nSince I also want to use opencv, which only supports python 2.7 so far, I installed opencv under Python 2.7.\nHey, here comes the problem, what if I want to import numpy as well as cv in the same script?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1449,"Q_Id":17961391,"Users Score":1,"Answer":"You'll have to install all the libraries you want to use together with OpenCV for Python 2.7. This is not much of a problem, you can do it with pip in one line, or choose one of the many pre-built scientific Python packages.","Q_Score":0,"Tags":"python,opencv","A_Id":17971361,"CreationDate":"2013-07-31T04:04:00.000","Title":"OpenCV Python 3.3","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a huge list of company names and a huge list of zipcodes associated with those names. (>100,000). \nI have to output similar names (for example, AJAX INC and AJAX are the same company, I have chosen a threshold of 4 characters for edit distance), but only if their corresponding zipcodes match too. \nThe trouble is that I can put all these company names in a dictionary, and associate a list of zipcode and other characteristics with that dictionary key. However, then I have to match each pair, and with O(n^2), it takes forever. Is there a faster way to do it?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":164,"Q_Id":18023356,"Users Score":1,"Answer":"Create a dictionary keyed by zipcode, with lists of company names as the values. Now you only have to match company names per zipcode, a much smaller search space.","Q_Score":0,"Tags":"python,levenshtein-distance","A_Id":18023402,"CreationDate":"2013-08-02T18:02:00.000","Title":"Disambiguation of Names using Edit Distance","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been wondering... If I am reading, say, a 400MB csv file into a pandas dataframe (using read_csv or read_table), is there any way to guesstimate how much memory this will need? 
Just trying to get a better feel of data frames and memory...","AnswerCount":7,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":124007,"Q_Id":18089667,"Users Score":116,"Answer":"Here's a comparison of the different methods - sys.getsizeof(df) is simplest.\nFor this example, df is a dataframe with 814 rows, 11 columns (2 ints, 9 objects) - read from a 427kb shapefile\nsys.getsizeof(df)\n\n>>> import sys\n>>> sys.getsizeof(df)\n(gives results in bytes)\n462456\n\ndf.memory_usage()\n\n>>> df.memory_usage()\n...\n(lists each column at 8 bytes\/row)\n\n>>> df.memory_usage().sum()\n71712\n(roughly rows * cols * 8 bytes)\n\n>>> df.memory_usage(deep=True)\n(lists each column's full memory usage)\n\n>>> df.memory_usage(deep=True).sum()\n(gives results in bytes)\n462432\n\n\ndf.info()\nPrints dataframe info to stdout. Technically these are kibibytes (KiB), not kilobytes - as the docstring says, \"Memory usage is shown in human-readable units (base-2 representation).\" So to get bytes would multiply by 1024, e.g. 451.6 KiB = 462,438 bytes.\n\n>>> df.info()\n...\nmemory usage: 70.0+ KB\n\n>>> df.info(memory_usage='deep')\n...\nmemory usage: 451.6 KB","Q_Score":170,"Tags":"python,pandas","A_Id":47751572,"CreationDate":"2013-08-06T20:18:00.000","Title":"How to estimate how much memory a Pandas' DataFrame will need?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been wondering... If I am reading, say, a 400MB csv file into a pandas dataframe (using read_csv or read_table), is there any way to guesstimate how much memory this will need? Just trying to get a better feel of data frames and memory...","AnswerCount":7,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":124007,"Q_Id":18089667,"Users Score":10,"Answer":"Yes there is. Pandas will store your data in 2 dimensional numpy ndarray structures grouping them by dtypes. ndarray is basically a raw C array of data with a small header. So you can estimate it's size just by multiplying the size of the dtype it contains with the dimensions of the array.\nFor example: if you have 1000 rows with 2 np.int32 and 5 np.float64 columns, your DataFrame will have one 2x1000 np.int32 array and one 5x1000 np.float64 array which is:\n4bytes*2*1000 + 8bytes*5*1000 = 48000 bytes","Q_Score":170,"Tags":"python,pandas","A_Id":18089887,"CreationDate":"2013-08-06T20:18:00.000","Title":"How to estimate how much memory a Pandas' DataFrame will need?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a slight variant on the \"find k nearest neighbours\" algorithm which involves rejecting those that don't satisfy a certain condition and I can't think of how to do it efficiently. \nWhat I'm after is to find the k nearest neighbours that are in the current line of sight. Unfortunately scipy.spatial.cKDTree doesn't provide an option for searching with a filter to conditionally reject points.\nThe best algorithm I can come up with is to query for n nearest neighbours and if there aren't k that are in the line of sight then query it again for 2n nearest neighbours and repeat. Unfortunately this would mean recomputing the n nearest neighbours repeatedly in the worst cases. 
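A minimal sketch of the memory-inspection calls compared above; the DataFrame here is a made-up example rather than the shapefile from the answer.

import sys
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": np.arange(1000),
                   "b": np.random.rand(1000),
                   "c": ["some text"] * 1000})

print(sys.getsizeof(df))                  # total size of the object, in bytes
print(df.memory_usage().sum())            # shallow estimate: 8 bytes per cell plus the index
print(df.memory_usage(deep=True).sum())   # includes the real size of object (string) columns
df.info(memory_usage="deep")              # human-readable summary (KiB/MiB)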
The performance hit gets worse the more times I have to repeat this query. On the other hand setting n too high is potentially wasteful if most of the points returned aren't needed.\nThe line of sight changes frequently so I can't recompute the cKDTree each time either. Any suggestions?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":631,"Q_Id":18144810,"Users Score":0,"Answer":"If you are looking for the neighbours in a line of sight, couldn't use an method like \n\ncKDTree.query_ball_point(self, x, r, p, eps)\n\nwhich allows you to query the KDTree for neighbours that are inside a radius of size r around the x array points.\nUnless I misunderstood your question, it seems that the line of sight is known and is equivalent to this r value.","Q_Score":4,"Tags":"python,constraints,nearest-neighbor,kdtree","A_Id":18339341,"CreationDate":"2013-08-09T10:36:00.000","Title":"nearest k neighbours that satisfy conditions (python)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a numpy fft for a large number of samples. How do I reduce the resolution bandwidth, so that it will show me fewer frequency bins, with averaged power output?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":921,"Q_Id":18150150,"Users Score":1,"Answer":"The bandwidth of each FFT result bin is inversely proportional to the length of the FFT window. For a wider bandwidth per bin, use a shorter FFT. If you have more data, then Welch's method can be used with sequential STFT windows to get an average estimate.","Q_Score":0,"Tags":"python,numpy,fft","A_Id":18153590,"CreationDate":"2013-08-09T15:19:00.000","Title":"FFT resolution bandwidth","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large corpus of data (text) that I have converted to a sparse term-document matrix (I am using scipy.sparse.csr.csr_matrix to store sparse matrix). I want to find, for every document, top n nearest neighbour matches. I was hoping that NearestNeighbor routine in Python scikit-learn library (sklearn.neighbors.NearestNeighbor to be precise) would solve my problem, but efficient algorithms that use space partitioning data structures such as KD trees or Ball trees do not work with sparse matrices. 
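One hedged way to combine the radius query suggested above with a visibility test is to over-fetch inside the radius and filter; is_visible below is a placeholder for whatever line-of-sight predicate the application actually uses, and the data is random.

import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(10000, 2)
tree = cKDTree(points)

def is_visible(idx):
    return True   # placeholder: replace with the real line-of-sight check

def k_nearest_visible(x, k, r):
    candidates = tree.query_ball_point(x, r)              # every index within radius r
    candidates = [i for i in candidates if is_visible(i)]
    if not candidates:
        return []
    d = np.linalg.norm(points[candidates] - x, axis=1)
    order = np.argsort(d)[:k]                             # keep the k closest survivors
    return [candidates[i] for i in order]

print(k_nearest_visible(np.array([0.5, 0.5]), k=5, r=0.1))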
Only brute-force algorithm works with sparse matrices (which is infeasible in my case as I am dealing with large corpus).\nIs there any efficient implementation of nearest neighbour search for sparse matrices (in Python or in any other language)?\nThanks.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":3890,"Q_Id":18164348,"Users Score":3,"Answer":"You can try to transform your high-dimensional sparse data to low-dimensional dense data using TruncatedSVD then do a ball-tree.","Q_Score":9,"Tags":"python,scipy,scikit-learn,nearest-neighbor","A_Id":18201497,"CreationDate":"2013-08-10T17:07:00.000","Title":"Efficient nearest neighbour search for sparse matrices","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 200k lines list of number ranges like start_position,stop position.\nThe list includes all kinds of overlaps in addition to nonoverlapping ones.\nthe list looks like this\n\n[3,5] \n[10,30]\n[15,25]\n[5,15]\n[25,35]\n...\n\nI need to find the ranges that a given number fall in. And will repeat it for 100k numbers.\nFor example if 18 is the given number with the list above then the function should return\n[10,30]\n[15,25]\nI am doing it in a overly complicated way using bisect, can anybody give a clue on how to do it in a faster way.\nThanks","AnswerCount":6,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3636,"Q_Id":18179680,"Users Score":0,"Answer":"How about,\n\nsort by first column O(n log n)\nbinary search to find indices that are out of range O(log n)\nthrow out values out of range\nsort by second column O(n log n)\nbinary search to find indices that are out of range O(log n)\nthrow out values out of range\nyou are left with the values in range\n\nThis should be O(n log n)\nYou can sort rows and cols with np.sort and a binary search should only be a few lines of code.\nIf you have lots of queries, you can save the first sorted copy for subsequent calls but not the second. Depending on the number of queries, it may turn out to be better to do a linear search than to sort then search.","Q_Score":6,"Tags":"python,algorithm","A_Id":18180322,"CreationDate":"2013-08-12T04:47:00.000","Title":"finding a set of ranges that a number fall in","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just installed ArcGIS v10.2 64bit background processing which installs Python 2.7.3 64bit and NumPy 1.6.1. I installed SciPy 0.12.0 64bit to the same Python installation.\nWhen I opened my Python interpreter I was able to successfully import arcpy, numpy, and scipy. However, when I tried to import scipy.ndimage I got an error that said numpy.core.multiarray failed to import. Everything I have found online related to this error references issues between scipy and numpy and suggest upgrading to numpy 1.6.1. 
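A hedged sketch of the densify-then-ball-tree idea from the answer above; the component and neighbour counts are arbitrary choices, and the matrix is a random stand-in for a term-document matrix.

from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import NearestNeighbors

X = sparse_random(1000, 5000, density=0.001, format="csr")

X_dense = TruncatedSVD(n_components=100).fit_transform(X)   # low-dimensional dense embedding
nn = NearestNeighbors(n_neighbors=10, algorithm="ball_tree").fit(X_dense)
distances, indices = nn.kneighbors(X_dense)                 # top-10 neighbours per document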
I'm already at numpy 1.6.1.\nAny ideas how to deal with this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":6801,"Q_Id":18282568,"Users Score":3,"Answer":"So it seems that the cause of the error was incompatibility between scipy 0.12.0 and the much older numpy 1.6.1.\nThere are two ways to fix this - either to upgrade numpy (to ~1.7.1) or to downgrade scipy (to ~0.10.1).\nIf ArcGIS 10.2 specifically requires Numpy 1.6.1, the easiest option is to downgrade scipy.","Q_Score":7,"Tags":"python,numpy,scipy","A_Id":18321537,"CreationDate":"2013-08-16T21:44:00.000","Title":"SciPy 0.12.0 and Numpy 1.6.1 - numpy.core.multiarray failed to import","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to open a csv file, select 1000 random rows and save those rows to a new file. I'm stuck and can't see how to do it. Can anyone help?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":11279,"Q_Id":18314913,"Users Score":-1,"Answer":"The basic procedure is this:\n1. Open the input file\nThis can be accomplished with the basic builtin open function.\n2. Open the output file\nYou'll probably use the same method that you chose in step #1, but you'll need to open the file in write mode.\n3. Read the input file to a variable\nIt's often preferable to read the file one line at a time, and operate on that one line before reading the next, but if memory is not a concern, you can also read the entire thing into a variable all at once.\n4. Choose selected lines\nThere will be any number of ways to do this, depending on how you did step #3, and your requirements. You could use filter, or a list comprehension, or a for loop with an if statement, etc. The best way depends on the particular constraints of your goal.\n5. Write the selected lines\nTake the selected lines you've chosen in step #4 and write them to the file.\n6. Close the files\nIt's generally good practice to close the files you've opened to prevent resource leaks.","Q_Score":0,"Tags":"python,random-sample","A_Id":18315125,"CreationDate":"2013-08-19T13:23:00.000","Title":"Selecting random rows with python and writing to a new file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using HoughLinesP raises \"<'unknown'> is not a numpy array\", but my array is really a numpy array.\nIt works on one of my computer, but not on my robot...","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":158,"Q_Id":18352493,"Users Score":0,"Answer":"For me, it wasn't working when the environment was of ROS Fuerte but it worked when the environment was of ROS Groovy. \nAs Alexandre had mentioned above, it must be the problem with the opencv2 versions. 
Fuerte had 2.4.2 while Groovy had 2.4.6","Q_Score":0,"Tags":"python,opencv","A_Id":21596301,"CreationDate":"2013-08-21T08:30:00.000","Title":"OpenCv2: Using HoughLinesP raises \" is not a numpy array\"","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using HoughLinesP raises \"<'unknown'> is not a numpy array\", but my array is really a numpy array.\nIt works on one of my computer, but not on my robot...","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":158,"Q_Id":18352493,"Users Score":2,"Answer":"Found it: \nI don't have the same opencv version on my robot and on my computer !\nFor the records calling HoughLinesP:\n\nworks fine on 2.4.5 and 2.4.6\nleads to \" is not a numpy array\" with version $Rev: 4557 $","Q_Score":0,"Tags":"python,opencv","A_Id":18353075,"CreationDate":"2013-08-21T08:30:00.000","Title":"OpenCv2: Using HoughLinesP raises \" is not a numpy array\"","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a directed graph in which I want to efficiently find a list of all K-th order neighbors of a node. K-th order neighbors are defined as all nodes which can be reached from the node in question in exactly K hops.\nI looked at networkx and the only function relevant was neighbors. However, this just returns the order 1 neighbors. For higher order, we need to iterate to determine the full set. I believe there should be a more efficient way of accessing K-th order neighbors in networkx.\nIs there a function which efficiently returns the K-th order neighbors, without incrementally building the set?\nEDIT: In case there exist other graph libraries in Python which might be useful here, please do mention those.","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":10646,"Q_Id":18393842,"Users Score":0,"Answer":"Yes,you can get a k-order ego_graph of a node\nsubgraph = nx.ego_graph(G,node,radius=k)\nthen neighbors are nodes of the subgraph\nneighbors= list(subgraph.nodes())","Q_Score":11,"Tags":"python,networkx,adjacency-list","A_Id":71715493,"CreationDate":"2013-08-23T02:45:00.000","Title":"K-th order neighbors in graph - Python networkx","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a directed graph in which I want to efficiently find a list of all K-th order neighbors of a node. K-th order neighbors are defined as all nodes which can be reached from the node in question in exactly K hops.\nI looked at networkx and the only function relevant was neighbors. However, this just returns the order 1 neighbors. For higher order, we need to iterate to determine the full set. 
I believe there should be a more efficient way of accessing K-th order neighbors in networkx.\nIs there a function which efficiently returns the K-th order neighbors, without incrementally building the set?\nEDIT: In case there exist other graph libraries in Python which might be useful here, please do mention those.","AnswerCount":6,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":10646,"Q_Id":18393842,"Users Score":27,"Answer":"You can use:\nnx.single_source_shortest_path_length(G, node, cutoff=K)\nwhere G is your graph object.","Q_Score":11,"Tags":"python,networkx,adjacency-list","A_Id":21031826,"CreationDate":"2013-08-23T02:45:00.000","Title":"K-th order neighbors in graph - Python networkx","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a text file which is 10k lines long and I need to build a function to extract 10 random lines each time from this file. I already found how to generate random numbers in Python with numpy and also how to open a file but I don't know how to mix it all together. Please help.","AnswerCount":6,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":3957,"Q_Id":18455589,"Users Score":1,"Answer":"It is possible to do the job with one pass and without loading the entire file into memory as well. Though the code itself is going to be much more complicated and mostly unneeded unless the file is HUGE. \nThe trick is the following:\nSuppose we only need one random line, then first save first line into a variable, then for ith line, replace the currently with probability 1\/i. Return the saved line when reaching end of file. \nFor 10 random lines, then have an list of 10 element and do the process 10 times for each line in the file.","Q_Score":4,"Tags":"python,numpy","A_Id":18456597,"CreationDate":"2013-08-27T01:35:00.000","Title":"Retrieve 10 random lines from a file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am creating a GUI program using wxPython. I am also using matplotlib to graph some data. This data needs to be animated. To animate the data I am using the FuncAnimate function, which is part of the matplotlib package.\nWhen I first started to write my code I was using a PC, running windows 7. I did my initial testing on this computer and everything was working fine. However my program needs to be cross platform. So I began to run some test using a Mac. This is where I began to encounter an error. As I explained before, in my code I have to animate some data. I programmed it such that the user has the ability to play and pause the animation. Now when the user pauses the animation I get the following error: AttributeError: 'FigureCanvasWxAgg' object has no attribute '_idletimer'. Now I find this to be very strange because like I said I ran this same code on a PC and never got this error. 
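single_source_shortest_path_length with a cutoff returns every node within K hops; to keep only the nodes at exactly K hops you can filter the returned distances. A small sketch, assuming a networkx DiGraph G (the random graph is only for illustration).

import networkx as nx

G = nx.gnp_random_graph(50, 0.1, directed=True)
node, K = 0, 3

lengths = nx.single_source_shortest_path_length(G, node, cutoff=K)
kth_order = [n for n, d in lengths.items() if d == K]   # exactly K hops away
print(kth_order)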
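The one-pass idea described above is reservoir sampling; a standard k-element sketch of it (not the poster's code) looks like this.

import random

def sample_lines(path, k=10):
    reservoir = []
    with open(path) as fh:
        for i, line in enumerate(fh, start=1):
            if i <= k:
                reservoir.append(line)        # fill the reservoir with the first k lines
            else:
                j = random.randint(1, i)      # keep the current line with probability k/i
                if j <= k:
                    reservoir[j - 1] = line
    return reservoir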
\nI was wondering if anyone could explain to me what is meant by this _idletimer error and what are possible causes for this.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":287,"Q_Id":18472394,"Users Score":1,"Answer":"_idletimer is likely to be a private, possibly implementation specific member of one of the classes - since you do not include the code or context I can not tell you which.\nIn general anything that starts with an _ is private and if it is not your own, and specific to the local class, should not be used by your code as it may change or even disappear when you rely on it.","Q_Score":1,"Tags":"python,macos,matplotlib,wxpython","A_Id":18474882,"CreationDate":"2013-08-27T17:55:00.000","Title":"AttributeError: 'FigureCanvasWxAgg' object has no attribute '_idletimer'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im wanting to use imshow() to create an image of a 2D histogram. However on several of the examples ive seen the 'extent' is defined. What does 'extent' actually do and how do you choose what values are appropriate?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":128,"Q_Id":18511206,"Users Score":2,"Answer":"Extent defines the images max and min of the horizontal and vertical values. It takes four values like so: extent=[horizontal_min,horizontal_max,vertical_min,vertical_max].","Q_Score":1,"Tags":"python,matplotlib,histogram2d","A_Id":18511409,"CreationDate":"2013-08-29T12:36:00.000","Title":"What does extent do within imshow()?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been running sci-kit learn's DBSCAN implementation to cluster a set of geotagged photos by lat\/long. For the most part, it works pretty well, but I came across a few instances that were puzzling. For instance, there were two sets of photos for which the user-entered text field specified that the photo was taken at Central Park, but the lat\/longs for those photos were not clustered together. The photos themselves confirmed that they both sets of observations were from Central Park, but the lat\/longs were in fact further apart than epsilon.\nAfter a little investigation, I discovered that the reason for this was because the lat\/long geotags (which were generated from the phone's GPS) are pretty imprecise. When I looked at the location accuracy of each photo, I discovered that they ranged widely (I've seen a margin of error of up to 600 meters) and that when you take the location accuracy into account, these two sets of photos are within a nearby distance in terms of lat\/long.\nIs there any way to account for margin of error in lat\/long when you're doing DBSCAN? 
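For the extent question above, a minimal example; the data and axis limits are made up.

import numpy as np
import matplotlib.pyplot as plt

heat = np.random.rand(50, 100)                     # 2-D histogram counts
plt.imshow(heat, extent=[0.0, 10.0, -1.0, 1.0],    # [x_min, x_max, y_min, y_max] in data units
           aspect="auto", origin="lower")
plt.colorbar()
plt.show()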
\n(Note: I'm not sure if this question is as articulate as it should be, so if there's anything I can do to make it more clear, please let me know.)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":723,"Q_Id":18519356,"Users Score":1,"Answer":"Note that DBSCAN doesn't actually need the distances.\nLook up Generalized DBSCAN: all it really uses is a \"is a neighbor of\" relationship.\nIf you really need to incorporate uncertainty, look up the various DBSCAN variations and extensions that handle imprecise data explicitely. However, you may get pretty much the same results just by choosing a threshold for epsilon that is somewhat reasonable. There is room for choosing a larger epsilon that the one you deem adequate: if you want to use epsilon = 1km, and you assume your data is imprecise on the range of 100m, then use 1100m as epsilon instead.","Q_Score":0,"Tags":"python,algorithm,cluster-analysis,data-mining,dbscan","A_Id":18537377,"CreationDate":"2013-08-29T19:19:00.000","Title":"DBSCAN with potentially imprecise lat\/long coordinates","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"OpenCV2 for python have 2 function\n\n[Function 1]\n\nPython: cv2.ellipse(img, center, axes, angle, startAngle, endAngle, color[, thickness[, lineType[, shift]]]) \u2192 None\n\n[Function 2]\n\nPython: cv2.ellipse(img, box, color[, thickness[, lineType]]) \u2192 None\n\n\nI want to use [Function 1]\nBut when I use this Code\n\ncv2.ellipse(ResultImage, Circle, Size, Angle, 0, 360, Color, 2, cv2.CV_AA, 0)\n\nIt raise\n\nTypeError: ellipse() takes at most 5 arguments (10 given)\n\n\nCould you help me?","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":13427,"Q_Id":18595099,"Users Score":0,"Answer":"these parameters should be integer, or it will raise TypeError","Q_Score":14,"Tags":"python,opencv","A_Id":20780282,"CreationDate":"2013-09-03T14:39:00.000","Title":"How I can use cv2.ellipse?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"OpenCV2 for python have 2 function\n\n[Function 1]\n\nPython: cv2.ellipse(img, center, axes, angle, startAngle, endAngle, color[, thickness[, lineType[, shift]]]) \u2192 None\n\n[Function 2]\n\nPython: cv2.ellipse(img, box, color[, thickness[, lineType]]) \u2192 None\n\n\nI want to use [Function 1]\nBut when I use this Code\n\ncv2.ellipse(ResultImage, Circle, Size, Angle, 0, 360, Color, 2, cv2.CV_AA, 0)\n\nIt raise\n\nTypeError: ellipse() takes at most 5 arguments (10 given)\n\n\nCould you help me?","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":13427,"Q_Id":18595099,"Users Score":7,"Answer":"Make sure all the ellipse parameters are int otherwise it raises \"TypeError: ellipse() takes at most 5 arguments (10 given)\". 
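One hedged way to apply the enlarge-epsilon advice above to lat/long data is to cluster on the haversine metric with the GPS margin of error added to the radius; the coordinates and distances below are illustrative only.

import numpy as np
from sklearn.cluster import DBSCAN

latlon_deg = np.array([[40.78, -73.97], [40.79, -73.96], [40.76, -73.99]])
coords = np.radians(latlon_deg)          # haversine expects [lat, lon] in radians

earth_radius_km = 6371.0
eps_km = 1.0 + 0.6                       # intended radius plus ~600 m of GPS uncertainty

labels = DBSCAN(eps=eps_km / earth_radius_km, min_samples=2,
                metric="haversine", algorithm="ball_tree").fit_predict(coords)
print(labels)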
Had the same problem and casting the parameters to int, fixed it.\nPlease note that in Python, you should round the number first and then use int(), since int function will cut the number:\nx = 2.7 , int(x) will be 2 not 3","Q_Score":14,"Tags":"python,opencv","A_Id":28592694,"CreationDate":"2013-09-03T14:39:00.000","Title":"How I can use cv2.ellipse?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working a lot with pytables and HDF5 data and I have a question regarding the attributes of nodes (the attributes you access via pytables 'node._v_attrs' property).\nAssume that I set such an attribute of an hdf5 node. I do that over and over again, setting a particular attribute\n(1) always to the same value (so overall the value stored in the hdf5file does not change qualitatively)\n(2) always with a different value\nHow are these operations in terms of speed and memory? What I mean is the following, does setting the attribute really imply deletion of the attribute in the hdf5 file and adding a novel attribute with the same name as before? If so, does that mean every time I reset an existing attribute the size of the hdf5 file is slightly increased and keeps slowly growing until my hard disk is full?\nIf this is true, would it be more beneficial to check before I reset whether I have case (1) [and I should not store at all but compare data to the attribute written on disk] and only reassign if I face case (2) [i.e. the attribute value in the hdf5file is not the one I want to write to the hdf5 file].\nThanks a lot and best regards,\nRobert","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":986,"Q_Id":18638461,"Users Score":3,"Answer":"HDF5 attribute access is notoriously slow. HDF5 is really built for and around the array data structure. Things like groups and attributes are great helpers but they are not optimized. \nThat said while attribute reading is slow, attribute writing is even slower. Therefore, it is always worth the extra effort to do what you suggest. Check if the attribute exists and if it has the desired value before writing it. This should give you a speed boost as compared to just writing it out every time. \nLuckily, the effect on memory of attributes -- both on disk and in memory -- is minimal. This is because ALL attributes on a node fit into 64 kb of special metadata space. If you try to write more than 64 kb worth of attributes, HDF5 and PyTables will fail.\nI hope this helps.","Q_Score":0,"Tags":"python,hdf5,pytables","A_Id":18641432,"CreationDate":"2013-09-05T14:02:00.000","Title":"Pytables, HDF5 Attribute setting and deletion,","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Enthought canopy for data analysis. I didn't find any option to create a .py file to write my code and save it for later use. I tried File> New >IPython Notebook, wrote my code and saved it. But the next time I opened it within Canopy editor, it wasn't editable. I need something like a Python shell where you just open a 'New Window', write all your code and press F5 to execute it. It could be saved for later use as well. 
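A sketch of the int-casting fix described in the two answers above; the geometry values are arbitrary.

import numpy as np
import cv2

img = np.zeros((400, 400, 3), dtype=np.uint8)
center = (200.4, 199.7)    # floats, e.g. coming out of a fit
axes = (120.2, 60.8)
angle = 30.6

cv2.ellipse(img,
            (int(round(center[0])), int(round(center[1]))),
            (int(round(axes[0])), int(round(axes[1]))),
            int(round(angle)), 0, 360, (0, 255, 0), 2)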
Although pandas and numpy work in canopy editor, they are not recognized by Python shell (whenever I write import pandas as pd, it says no module named pandas). Please help.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1009,"Q_Id":18646039,"Users Score":1,"Answer":"Umair, ctrl + n or File > Python File will do what you want.\nBest, \nJonathan","Q_Score":0,"Tags":"python,enthought","A_Id":18647689,"CreationDate":"2013-09-05T21:12:00.000","Title":"How to create a .py file within canopy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Enthought canopy for data analysis. I didn't find any option to create a .py file to write my code and save it for later use. I tried File> New >IPython Notebook, wrote my code and saved it. But the next time I opened it within Canopy editor, it wasn't editable. I need something like a Python shell where you just open a 'New Window', write all your code and press F5 to execute it. It could be saved for later use as well. Although pandas and numpy work in canopy editor, they are not recognized by Python shell (whenever I write import pandas as pd, it says no module named pandas). Please help.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1009,"Q_Id":18646039,"Users Score":1,"Answer":"Let me add that if you need to open the file, even if it's a text file but you want to be able to run it as a Python file (or whatever language format) just look at the bottom of the Canopy window and select the language you want to use. In some cases it may default to just text. Click it and select the language you want. Once you've done that, you'll see that the run button will be active and the command appear in their respective color.","Q_Score":0,"Tags":"python,enthought","A_Id":19578868,"CreationDate":"2013-09-05T21:12:00.000","Title":"How to create a .py file within canopy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to generate a various points and its xyz coordinates and then calculate the distance.\n Lets say I want create random point coordinates from point a away from 2.5 cm in all directions. so that i can calculate the mutual distance and angles form a to all generated point (red)\nI want to remove the redundant point and all those points which do not satisfy my criteria and also have same position.\n![enter image description here][1]\nfor example, I know the coordinates for the two points a (-10, 12, 2) and b (-9, 11, 5). The distance between a and b is 5 cm.\nThe question is: How can I generate the red points' coordinates. I knew how to calculate the distance and angle. So far, I have tried the following calculation:\nI am not able to define the points randomly.\nI found a few solutions that don't work.\nAny help would be appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":5337,"Q_Id":18670974,"Users Score":1,"Answer":"Generically you can populate your points in two ways:\n1) use random to create the coordinates for your points within the outer bounds of the solution. 
If a given random point falls outside the outer limit or inside the inner limit, reject it and draw again.\n2) You can do it using polar coordinates: generate a random distance between the inner and outer bound and a yaw rotation. In 3d, you'd have to use two rotations, one for yaw and another for pitch. This avoids the need for rejecting points.\nYou can simplify the code for both by generating all the points in a circle (or sphere) around the origin (0,0,0) instead of in place. Then move the whole set of points to the correct blue circle location by adding its position to the position of each point.","Q_Score":3,"Tags":"python","A_Id":18687144,"CreationDate":"2013-09-07T07:35:00.000","Title":"How to get a series of random points in a specific range of distances from a reference point and generate xyz coordinates","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to perform clustering in Python using Random Forests. In the R implementation of Random Forests, there is a flag you can set to get the proximity matrix. I can't seem to find anything similar in the python scikit version of Random Forest. Does anyone know if there is an equivalent calculation for the python version?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":6625,"Q_Id":18703136,"Users Score":19,"Answer":"We don't implement a proximity matrix in Scikit-Learn (yet). \nHowever, this could be done by relying on the apply function provided in our implementation of decision trees. That is, for all pairs of samples in your dataset, iterate over the decision trees in the forest (through forest.estimators_) and count the number of times they fall in the same leaf, i.e., the number of times apply gives the same node id for both samples in the pair. \nHope this helps.","Q_Score":14,"Tags":"python,scikit-learn,random-forest","A_Id":18719287,"CreationDate":"2013-09-09T16:49:00.000","Title":"Proximity Matrix in sklearn.ensemble.RandomForestClassifier","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm testing some things in image retrival and i was thinking about how to sort out bad pictures of a dataset. For e.g there are only pictures of houses and in between there is a picture of people and some of cars. So at the end i want to get only the houses.\nAt the Moment my approach looks like:\n\ncomputing descriptors (Sift) of all pictures\nclustering all descriptors with k-means\ncreating histograms of the pictures by computing the euclidean distance between the cluster centers and the descriptors of a picture\nclustering the histograms again.\n\nat this moment i have got a first sort (which isn't really good). Now my Idea is to take all pictures which are clustered to a center with len(center) > 1 and cluster them again and again. So the Result is that the pictures which are particular in a center will be sorted out. Maybe its enough to fit the result again to the same k-means without clustering again?!\nthe result isn't satisfying so maybe someone has got a good idea.\nFor Clustering etc.
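A hedged sketch of the apply-based proximity computation the accepted answer above describes: forest.apply returns, for every sample, the leaf index it lands in for each tree, and the proximity of two samples is the fraction of trees in which those indices match. The iris data is just a convenient stand-in.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

leaves = forest.apply(X)            # shape (n_samples, n_trees)
n = leaves.shape[0]
proximity = np.zeros((n, n))
for i in range(n):
    proximity[i] = (leaves == leaves[i]).mean(axis=1)   # share-a-leaf fraction per pair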
I'm using k-means of scikit learn.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":550,"Q_Id":18721204,"Users Score":1,"Answer":"K-means is not very robust to noise; and your \"bad pictures\" probably can be considered as such. Furthermore, k-means doesn't work too well for sparse data; as the means will not be sparse.\nYou may want to try other, more modern, clustering algorithms that can handle this situation much better.","Q_Score":0,"Tags":"python,computer-vision,cluster-analysis,scikit-learn,k-means","A_Id":18735714,"CreationDate":"2013-09-10T14:09:00.000","Title":"Sort out bad pictures of a dataset (k-means, clustering, sklearn)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm testing some things in image retrival and i was thinking about how to sort out bad pictures of a dataset. For e.g there are only pictures of houses and in between there is a picture of people and some of cars. So at the end i want to get only the houses.\nAt the Moment my approach looks like:\n\ncomputing descriptors (Sift) of all pictures\nclustering all descriptors with k-means\ncreating histograms of the pictures by computing the euclidean distance between the cluster centers and the descriptors of a picture\nclustering the histograms again.\n\nat this moment i have got a first sort (which isn't really good). Now my Idea is to take all pictures which are clustered to a center with len(center) > 1 and cluster them again and again. So the Result is that the pictures which are particular in a center will be sorted out. Maybe its enough to fit the result again to the same k-means without clustering again?!\nthe result isn't satisfying so maybe someone has got a good idea.\nFor Clustering etc. I'm using k-means of scikit learn.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":550,"Q_Id":18721204,"Users Score":1,"Answer":"I don't have the solution to your problem but here is a sanity check to perform prior to the final clustering, to check that the kind of features you extracted is suitable for your problem:\n\nextract the histogram features for all the pictures in your dataset\ncompute the pairwise distances of all the pictures in your dataset using the histogram features (you can use sklearn.metrics.pairwise_distance)\n\nnp.argsort the raveled distances matrix to find the indices of the 20 top closest pairs of distinct pictures according to your features (you have to filter out the zero-valued diagonal elements of the distance matrix) and do the same to extract the top 20 most farest pairs of pictures based on your histogram features.\nVisualize (for instance with plt.imshow) the pictures of top closest pairs and check that they are all pairs that you would expect to be very similar.\nVisualize the pictures of the top farest pairs and check that they are all very dissimilar.\nIf one of those 2 checks fails, then it means that histogram of bag of SIFT words is not suitable to your task. Maybe you need to extract other kinds of features (e.g. 
HoG features) or reorganized the way your extract the cluster of SIFT descriptors, maybe using a pyramidal pooling structure to extract info on the global layout of the pictures at various scales.","Q_Score":0,"Tags":"python,computer-vision,cluster-analysis,scikit-learn,k-means","A_Id":18735840,"CreationDate":"2013-09-10T14:09:00.000","Title":"Sort out bad pictures of a dataset (k-means, clustering, sklearn)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am doing some animating plots with ion()function. I want to draw and delete some lines. I found out axvspan() function, I can plot the lines and shapes with it as I want. But as long as I am doing an animation I also want to delete that lines and shapes. I couldn't find a way to delete them.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":159,"Q_Id":18743895,"Users Score":0,"Answer":"Ok I have found the necessary functions. I used dir() function to find methods. axvspan() returns a matplotlib.patches.Polygon result. This type of data has set_visible method, using it as x.set_visible(0) I removed the lines and shapes.","Q_Score":0,"Tags":"python,matplotlib,interactive-mode","A_Id":18815103,"CreationDate":"2013-09-11T14:26:00.000","Title":"Deleting Lines of a Plot Which has Plotted with axvspan()","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Recently, I have been studying OpenCV to detect and recognize faces using C++. In order to execute source code demonstration from the OpenCV website I need to run Python to crop image first. Unfortunately, the message error is 'ImportError: No module named Image' when I run the Python script (this script is provided by OpenCV website). I installed \"python-2.7.amd64\" and downloaded \"PIL-1.1.7.win32-py2.7\" to install Image library. However, the message error is 'Python version 2.7 required, which was not found in the registry'. And then, I downloaded the script written by Joakim L\u00f6w for Secret Labs AB \/ PythonWare to register registry in my computer. But the message error is \"Unable to register. You probably have the another Python installation\".\nI spent one month to search this issue on the internet but I cannot find the answer. Please support me to resolve my issue. 
\nThanks,\nTran Dang Bao","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":22097,"Q_Id":18776988,"Users Score":1,"Answer":"Try to put the python(2.7) at your Windows path.\nDo the following steps:\n\nOpen System Properties (Win+Pause) or My Computer and right-click then Properties\nSwitch to the Advanced tab\nClick Environment Variables\nSelect PATH in the System variables section\nClick Edit\nAdd python's path to the end of the list (the paths are separated by semicolons).\nexample C:\\Windows;C:\\Windows\\System32;C:\\Python27","Q_Score":3,"Tags":"python,windows,opencv,installation","A_Id":18777073,"CreationDate":"2013-09-13T01:48:00.000","Title":"Python 2.7 - ImportError: No module named Image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to make a very simple image that will illustrate a cash flow diagram based on user input. Basically, I just need to make an axis and some arrows facing up and down and proportional to the value of the cash flow. I would like to know how to do this with matplot.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":875,"Q_Id":18778266,"Users Score":0,"Answer":"If you simply need arrows pointing up and down, use Unicode arrows like \"\u2191\" and \"\u2193\". This would be really simple if rendering in a browser.","Q_Score":0,"Tags":"python,image,matplotlib","A_Id":18778542,"CreationDate":"2013-09-13T04:21:00.000","Title":"Cash flow diagram in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to plot a heatmap of a big microarray dataset (45K rows per 446 columns).\nUsing pcolor from matplotlib I am unable to do it because my pc goes easily out of memory (more than 8G)..\nI'd prefer to use python\/matplotlib instead of R for personal opinion..\nAny way to plot heatmaps in an efficient way?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1446,"Q_Id":18791469,"Users Score":0,"Answer":"I solved by downsampling the matrix to a smaller matrix.\nI decided to try two methodologies:\n\nsupposing I want to down-sample a matrix of 45k rows to a matrix of 1k rows, I took a row value every 45 rows\nanother methodology is, to down-sample 45k rows to 1k rows, to group the 45k rows into 1k groups (composed by 45 adjacent rows) and to take the average for each group as representative row\n\nHope it helps.","Q_Score":1,"Tags":"python,matplotlib,bigdata,heatmap","A_Id":36248111,"CreationDate":"2013-09-13T16:55:00.000","Title":"How to plot a heatmap of a big matrix with matplotlib (45K * 446)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some very large IPython (1.0) notebooks, which I find very unhandy to work with. I want to split the large notebook into several smaller ones, each covering a specific part of my analysis. However, the notebooks need to share data and (unpickleable) objects.\nNow, I want these notebooks to connect to the same kernel. How do I do this? 
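The second downsampling strategy above (average each block of 45 adjacent rows) can be done with a single reshape; this sketch uses a random stand-in matrix and trims the row count to an exact multiple of the block size.

import numpy as np

data = np.random.rand(45000, 446)    # stand-in for the 45K x 446 expression matrix
block = 45

n_keep = (data.shape[0] // block) * block                       # drop any remainder rows
reduced = data[:n_keep].reshape(-1, block, data.shape[1]).mean(axis=1)
print(reduced.shape)                                            # (1000, 446), easy to plot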
How can I change the kernel to which a notebook is connected? (And any ideas how to automate this step?)\nI don't want to use the parallel computing mechanism (which would be a trivial solution), because it would add much code overhead in my case.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1689,"Q_Id":18882510,"Users Score":1,"Answer":"When I have a long noetbook, I create functions from my code, and hide it into python modules, which I then import in the notebook.\nSo that I can have huge chunk of code hidden on the background, and my notebook smaller for handier manipulation.","Q_Score":6,"Tags":"ipython,ipython-notebook","A_Id":35315482,"CreationDate":"2013-09-18T21:17:00.000","Title":"How to share Ipython notebook kernels?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In wakari, how do I download a CSV file and create a new CSV file with each of the rows in the original file repeated N number of times in the new CSV file.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":18886383,"Users Score":0,"Answer":"cat dataset.csv dataset.csv dataset.csv dataset.csv > bigdata.csv","Q_Score":0,"Tags":"python","A_Id":18897337,"CreationDate":"2013-09-19T04:49:00.000","Title":"Repeat rows in files in wakari","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"After only briefly looking at numpy arrays, I don't understand how they are different than normal Python lists. Can someone explain the difference, and why I would use a numpy array as opposed to a list?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2487,"Q_Id":18907998,"Users Score":0,"Answer":"Numpy is an extension, and demands that all the objects on it are of the same type , defined on creation. It also provides a set of linear algebra operations. Its more like a mathematical framework for python to deal with Numeric Calculations (matrix, n stuffs).","Q_Score":1,"Tags":"python,arrays,list,numpy,multidimensional-array","A_Id":18908045,"CreationDate":"2013-09-20T02:37:00.000","Title":"Difference between a numpy array and a multidimensional list in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"to provide some context: Issues in an application are logged in an excel sheet and one of the columns in that sheet contains the email communication between the user (who had raised the issue) and the resolve team member. There are bunch of other columns containing other useful information. My job is to find useful insights from this data for Business.\n\nFind out what type of issue was that? e.g. was that a training issue for the user or access issue etc. This would mean that I analyze the mail text and figure out by some means the type of issue.\nHow many email conversations have happened for one issue?\nIs it a repeat issue?\nThere are other simple statistical problems e.g. How many issues per week etc...\n\nI read that NLP with Python can be solution to my problems. 
I also looked at Rapidminer for the same.\nNow my Question is \na. \"Am I on the right track?, Is NLP(Natural Language Processing) the solution to these problems?\"\nb. If yes, then how to start.. I have started reading book on NLP with Python, but that is huge, any specific areas that I should concentrate on and can start my analysis?\nc. How is Rapidminer tool? Can it answer all of these questions? The data volume is not too huge (may be 100000 rows)... looks like it is quite easy to build a process in rapidminer, hence started on it...\nAppreciate any suggestions!!!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":637,"Q_Id":18910200,"Users Score":0,"Answer":"Try xlrd Python Module to read and process excel sheets.\nI think an appropriate implementation using this module is an easy way to solve your problem.","Q_Score":0,"Tags":"python,nlp","A_Id":18910584,"CreationDate":"2013-09-20T06:26:00.000","Title":"Analyze Text to find patterns and useful information","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"While I am trying to install scikit-learn for my portable python, its saying \" Python 2.7 is not found in the registry\". In the next window, it does ask for an installation path but neither am I able to copy-paste the path nor write it manually. Otherwise please suggest some other alternative for portable python which has numpy, scipy and scikit-learn by default. Please note that I don't have administrative rights of the system so a portable version is preferred.","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1485,"Q_Id":18973863,"Users Score":2,"Answer":"you can easily download SciKit executable, extract it with python, copy SciKit folder and content to c:\\Portable Python 2.7.5.1\\App\\Lib\\site-packages\\ and you'll have SciKit in your portable python.\nI just had this problem and solved this way.","Q_Score":2,"Tags":"python-2.7,scikit-learn,portable-python","A_Id":20853862,"CreationDate":"2013-09-24T05:49:00.000","Title":"How to install scikit-learn for Portable Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm running python 2.7.5 with scikit_learn-0.14 on my Mac OSX Mountain Lion.\nEverything I run a svmlight command however, I get the following warning:\n\nDeprecationWarning: using a non-integer number instead of an integer will result in an error >in the future","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34473,"Q_Id":18994787,"Users Score":0,"Answer":"I also met this problem when I assigned numbers to a matrix.\nlike this:\nQmatrix[list2[0], list2[j]] = 1\nthe component may be a non-integer number, so I changed to this:\nQmatrix[int(list2[0]), int(list2[j])] = 1\nand the warning removed","Q_Score":12,"Tags":"python-2.7,scipy,svmlight","A_Id":44480375,"CreationDate":"2013-09-25T01:33:00.000","Title":"Python Svmlight Error: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System 
Administration and DevOps":0,"Web Development":0},{"Question":"I'm completely new to Python and want to use it for data analysis. I just installed Python 2.7 on my mac running OSX 10.8. I need the NumPy, SciPy, matplotlib and csv packages. I read that I could simply install the Anaconda package and get all in one. So I went ahead and downloaded\/installed Anaconda 1.7.\nHowever, when I type in:\n import numpy as np\nI get an error telling me that there is no such module. I assume this has to do with the location of the installation, but I can't figure out how to:\nA. Check that everything is actually installed properly\nB. Check the location of the installation.\nAny pointers would be greatly appreciated!\nThanks","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":69121,"Q_Id":19029333,"Users Score":0,"Answer":"I don't think the existing answer answers your specific question (about installing packages within Anaconda). When I install a new package via conda install , I then run conda list to ensure the package is now within my list of Anaconda packages.","Q_Score":21,"Tags":"python,macos,numpy,installation,anaconda","A_Id":38722056,"CreationDate":"2013-09-26T13:14:00.000","Title":"How to check that the anaconda package was properly installed","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm completely new to Python and want to use it for data analysis. I just installed Python 2.7 on my mac running OSX 10.8. I need the NumPy, SciPy, matplotlib and csv packages. I read that I could simply install the Anaconda package and get all in one. So I went ahead and downloaded\/installed Anaconda 1.7.\nHowever, when I type in:\n import numpy as np\nI get an error telling me that there is no such module. I assume this has to do with the location of the installation, but I can't figure out how to:\nA. Check that everything is actually installed properly\nB. Check the location of the installation.\nAny pointers would be greatly appreciated!\nThanks","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":69121,"Q_Id":19029333,"Users Score":1,"Answer":"Though the question is not relevant to Windows environment, FYI for windows. In order to use anaconda modules outside spyder or in cmd prompt, try to update the PYTHONPATH & PATH with C:\\Users\\username\\Anaconda3\\lib\\site-packages.\nFinally, restart the command prompt.\nAdditionally, sublime has a plugin 'anaconda' which can be used for sublime to work with anaconda modules.","Q_Score":21,"Tags":"python,macos,numpy,installation,anaconda","A_Id":41600022,"CreationDate":"2013-09-26T13:14:00.000","Title":"How to check that the anaconda package was properly installed","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to optimize functions with GPU calculation in Python, so I prefer to store all my data as ndarrays with dtype=float32. \nWhen I am using scipy.optimize.fmin_l_bfgs_b, I notice that the optimizer always passes a float64 (on my 64bit machine) parameter to my objective and gradient functions, even when I pass a float32 ndarray as the initial search point x0. 
This is different when I use the cg optimizer scipy.optimize.fmin_cg, where when I pass in a float32 array as x0, the optimizer will use float32 in all consequent objective\/gradient function invocations. \nSo my question is: can I enforce scipy.optimize.fmin_l_bfgs_b to optimize on float32 parameters like in scipy.optimize.fmin_cg?\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1678,"Q_Id":19041486,"Users Score":1,"Answer":"I am not sure you can ever do it. fmin_l_bfgd_b is provided not by pure python code, but by a extension (a wrap of FORTRAN code). In Win32\/64 platform it can be found at \\scipy\\optimize\\_lbfgsb.pyd. What you want may only be possible if you can compile the extension differently or modify the FORTRAN code. If you check that FORTRAN code, it has double precision all over the place, which is basically float64. I am not sure just changing them all to single precision will do the job.\nAmong the other optimization methods, cobyla is also provided by FORTRAN. Powell's methods too.","Q_Score":1,"Tags":"python,optimization,scipy,gpu,multidimensional-array","A_Id":19042578,"CreationDate":"2013-09-27T01:53:00.000","Title":"How to enforce scipy.optimize.fmin_l_bfgs_b to use 'dtype=float32'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In matplotlib, how do I specify the line width and color of a legend frame?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":13797,"Q_Id":19058485,"Users Score":20,"Answer":"For the width: legend.get_frame().set_linewidth(w)\nFor the color: legend.get_frame().set_edgecolor(\"red\")","Q_Score":19,"Tags":"python,matplotlib","A_Id":28295797,"CreationDate":"2013-09-27T19:18:00.000","Title":"Specifying the line width of the legend frame, in matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pandas dataframe with the following column names:\nResult1, Test1, Result2, Test2, Result3, Test3, etc...\nI want to drop all the columns whose name contains the word \"Test\". The numbers of such columns is not static but depends on a previous function.\nHow can I do that?","AnswerCount":11,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":184940,"Q_Id":19071199,"Users Score":6,"Answer":"This method does everything in place. Many of the other answers create copies and are not as efficient:\ndf.drop(df.columns[df.columns.str.contains('Test')], axis=1, inplace=True)","Q_Score":184,"Tags":"python,pandas,dataframe","A_Id":61194900,"CreationDate":"2013-09-28T20:10:00.000","Title":"Drop columns whose name contains a specific string from pandas DataFrame","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using scipy.cluster.hierarchy as sch to draw a dendogram after makeing an hierarchical clustering. The problem is that the clustering happens on the top of the dendogram in between 0.8 and 1.0 which is the similarity degree in the y axis. 
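A small usage example of the in-place column drop shown above, with a made-up frame.

import pandas as pd

df = pd.DataFrame({"Result1": [1, 2], "Test1": [3, 4],
                   "Result2": [5, 6], "Test2": [7, 8]})

df.drop(df.columns[df.columns.str.contains("Test")], axis=1, inplace=True)
print(df.columns.tolist())   # ['Result1', 'Result2']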
How can I \"cut\" the graph from 0 to 0.6, where nothing \"interesting\" is happening graphically?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":362,"Q_Id":19088527,"Users Score":0,"Answer":"If you're really only interested in distance proportions between the fusions, you could \n\nadapt your input linkage (subtract an offset from the third column of the linkage matrix). This will distort the absolute cophenetic distances, of course. \ndo some normalization of your input data before clustering it\n\nOr you\n\nmanipulate the dendrogram axes \/ adapt limits (I didn't try that)","Q_Score":2,"Tags":"python,scipy,hierarchical-clustering,dendrogram","A_Id":21080034,"CreationDate":"2013-09-30T07:19:00.000","Title":"How to resize y axis of a dendrogram","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to figure out the fastest method to find the determinant of sparse symmetric and real matrices in Python. I am using the scipy sparse module but am really surprised that there is no determinant function. I am aware I could use LU factorization to compute the determinant but don't see an easy way to do it because the return of scipy.sparse.linalg.splu is an object and instantiating a dense L and U matrix is not worth it - I may as well do sp.linalg.det(A.todense()) where A is my scipy sparse matrix. \nI am also a bit surprised why others have not faced the problem of efficient determinant computation within scipy. How would one use splu to compute the determinant? \nI looked into pySparse and scikits.sparse.cholmod. The latter is not practical right now for me - it needs package installations and I am also not sure how fast the code is before I go into all the trouble. \nAny solutions? Thanks in advance.","AnswerCount":4,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":6098,"Q_Id":19107617,"Users Score":6,"Answer":"The \"standard\" way to solve this problem is with a cholesky decomposition, but if you're not up to using any new compiled code, then you're out of luck. The best sparse cholesky implementation is Tim Davis's CHOLMOD, which is licensed under the LGPL and thus not available in scipy proper (scipy is BSD).","Q_Score":26,"Tags":"python,numpy,scipy,linear-algebra,sparse-matrix","A_Id":19616987,"CreationDate":"2013-10-01T03:48:00.000","Title":"How to compute scipy sparse matrix determinant without turning it to dense?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have raw-rgb video coming from a PAL 50i camera. How can I detect the start of a frame, just like I would detect the keyframe of h264 video, in gstreamer? I would like to do that for indexing\/cutting purposes.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":234,"Q_Id":19130365,"Users Score":1,"Answer":"If this really is raw rgb video, there is no (realistic) way to detect the start of the frame. 
I would assume your video would come as whole frames, so one buffer == one frame, and hence no need for such detection.","Q_Score":0,"Tags":"python,video,rgb,gstreamer","A_Id":19220952,"CreationDate":"2013-10-02T05:12:00.000","Title":"How to detect start of raw-rgb video frame?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to compute massive similarity computations between vectors in a sparse matrix. What is currently the best tool, scipy-sparse or pandas, for this task?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":839,"Q_Id":19171822,"Users Score":1,"Answer":"After some research I found that both pandas and Scipy have structures to represent sparse matrix efficiently in memory. But none of them have out of box support for compute similarity between vectors like cosine, adjusted cosine, euclidean etc. Scipy support this on dense matrix only. For sparse, Scipy support dot products and others linear algebra basic operations.","Q_Score":2,"Tags":"python,numpy,matrix,pandas","A_Id":19389797,"CreationDate":"2013-10-04T01:40:00.000","Title":"Scipy or pandas for sparse matrix computations?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a continuous random variable given by its density distribution function or by cumulative probability distribution function.\nThe distribution functions are not analytical. They are given numerically (for example as a list of (x,y) values).\nOne of the things that I would like to do with these distributions is to find a convolution of two of them (to have a distribution of a sum of two random properties).\nI do not want to write my own function for that if there is already something standard and tested. Does anybody know if it is the case?","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":287,"Q_Id":19184975,"Users Score":4,"Answer":"How about numpy.convolve? It takes two arrays, rather than two functions, which seems ideal for your use. I'll also mention the ECDF function in the statsmodels package in case you really want to turn your observations into (step) functions.","Q_Score":3,"Tags":"python,random,numpy,scipy,distribution","A_Id":19185124,"CreationDate":"2013-10-04T15:22:00.000","Title":"Is there a standard way to work with numerical probability density functions in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a JPG image, and I would like to find a way to:\n\nDecompose the image into red, green and blue intensity layers (8 bit per channel).\nColorise each of these now 'grayscale' images with its appropriate color\nProduce 3 output images in appropriate color, of each channel.\n\nFor example if I have an image:\ndog.jpg\nI want to produce:\ndog_blue.jpg dog_red.jpg and dog_green.jpg\nI do not want grayscale images for each channel. 
I want each image to be represented by its correct color.\nI have managed to use the decompose function in gimp to get the layers, but each one is grayscale and I can't seem to add color to it.\nI am currently using OpenCV and Python bindings for other projects so any suitable code on that side may be useful if it is not easy to do with gimp","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2596,"Q_Id":19213407,"Users Score":0,"Answer":"The blue, green and red images each have 1 channel only, so each is basically a gray-scale image.\nIf you want to add colors to dog_blue.jpg, for example, then you create a 3-channel image and copy the contents into all the channels, or do cvCvtColor(src,dst,CV_GRAY2BGR). Now you will be able to add colors to it, as it has become a 3-channel image.","Q_Score":1,"Tags":"python,opencv,rgb,gimp","A_Id":19217476,"CreationDate":"2013-10-06T20:09:00.000","Title":"How do I use Gimp \/ OpenCV Color to separate images into coloured RGB layers?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a JPG image, and I would like to find a way to:\n\nDecompose the image into red, green and blue intensity layers (8 bit per channel).\nColorise each of these now 'grayscale' images with its appropriate color\nProduce 3 output images in appropriate color, of each channel.\n\nFor example if I have an image:\ndog.jpg\nI want to produce:\ndog_blue.jpg dog_red.jpg and dog_green.jpg\nI do not want grayscale images for each channel. I want each image to be represented by its correct color.\nI have managed to use the decompose function in gimp to get the layers, but each one is grayscale and I can't seem to add color to it.\nI am currently using OpenCV and Python bindings for other projects so any suitable code on that side may be useful if it is not easy to do with gimp","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2596,"Q_Id":19213407,"Users Score":0,"Answer":"In the BGR image, you have three channels. When you split the channels using the split() function, like B,G,R=cv2.split(img), then B, G and R each become a single-channel (monochannel) image. So you need to add two extra channels filled with zeros to make each a 3-channel image that is activated only for a specific color channel.","Q_Score":1,"Tags":"python,opencv,rgb,gimp","A_Id":55448010,"CreationDate":"2013-10-06T20:09:00.000","Title":"How do I use Gimp \/ OpenCV Color to separate images into coloured RGB layers?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does anyone know the maximum number of columns in pandas, Python? 
\nI have just created a dataframe for pandas included more than 20,000 columns but I got memory error.\nThanks a lot","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":2587,"Q_Id":19221694,"Users Score":6,"Answer":"You get an out of memory error because you run out of memory, not because there is a limit on the number of columns.","Q_Score":1,"Tags":"python,pandas","A_Id":19221774,"CreationDate":"2013-10-07T09:52:00.000","Title":"How many columns in pandas, python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a mac server and I have both FINK and macport installation of python\/numpy\/scipy\nI was wondering if having both will affect the other? In terms of memory leaks\/unusual results? \nIn case you are wondering why both ? Well I like FINK but macports allows me to have python2.4 which FINK does not provide (yes I needed an old version for a piece of code I have)\nI wonder this since I tried to use homebrew once and it complained about the machine having port and FINK (I did not realize that port provided python2.4 so was looking at homebrew but when I realized port did give 2.4 I abandoned it)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":421,"Q_Id":19228380,"Users Score":0,"Answer":"In terms of how your Python interpreter works, no: there is no negative effect on having Fink Python as well as MacPorts Python installed on the same machine, just as there is no effect from having multiple installations of Python by anything.","Q_Score":0,"Tags":"python,macos,port,fink","A_Id":19228974,"CreationDate":"2013-10-07T15:10:00.000","Title":"macport and FINK","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know in a linked list there are a head node and a tail node. Well, for my data structures assignment, we are suppose to create a linked matrix with references to a north, south, east, and west node. I am at a loss of how to implement this. A persistent problem that bothers me is the head node and tail node. The user inputs the number of rows and the number of columns. Should I have multiple head nodes then at the beginning of each row and multiple tail nodes at the end of each row? If so, should I store the multiple head\/tail nodes in a list? \nThank you.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1333,"Q_Id":19234950,"Users Score":0,"Answer":"There's more than one way to interpret this, but one option is:\nHave a single \"head\" node at the top-left corner and a \"tail\" node at the bottom-right. 
There will then be row-head, row-tail, column-head, and column-tail nodes, but these are all accessible from the overall head and tail, so you don't need to keep track of them, and they're already part of the linked matrix, so they don't need to be part of a separate linked list.\n(Of course a function that builds up an RxC matrix of zeroes will probably have local variables representing the current row's head\/tail, but that's not a problem.)","Q_Score":0,"Tags":"python,list,matrix,linked-list","A_Id":19235077,"CreationDate":"2013-10-07T21:17:00.000","Title":"Linked Matrix Implementation in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know in a linked list there are a head node and a tail node. Well, for my data structures assignment, we are suppose to create a linked matrix with references to a north, south, east, and west node. I am at a loss of how to implement this. A persistent problem that bothers me is the head node and tail node. The user inputs the number of rows and the number of columns. Should I have multiple head nodes then at the beginning of each row and multiple tail nodes at the end of each row? If so, should I store the multiple head\/tail nodes in a list? \nThank you.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1333,"Q_Id":19234950,"Users Score":0,"Answer":"It really depends on what options you want\/need to efficiently support.\nFor instance, a singly linked list with only a head pointer can be a stack (insert and remove at the head). If you add a tail pointer you can insert at either end, but only remove at the head (stack or queue). A doubly linked list can support insertion or deletion at either end (deque). If you try to implement an operation that your data structure is not designed for you incur an O(N) penalty.\nSo I would start with a single pointer to the (0,0) element and then start working on the operations your instructor asks for. You may find you need additional pointers, you may not. My guess would be that you will be fine with a single head pointer.","Q_Score":0,"Tags":"python,list,matrix,linked-list","A_Id":19237061,"CreationDate":"2013-10-07T21:17:00.000","Title":"Linked Matrix Implementation in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataset of time-series examples. I want to calculate the similarity between various time-series examples, however I do not want to take into account differences due to scaling (i.e. I want to look at similarities in the shape of the time-series, not their absolute value). So, to this end, I need a way of normalizing the data. That is, making all of the time-series examples fall between a certain region e.g [0,100]. Can anyone tell me how this can be done in python","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":20885,"Q_Id":19256930,"Users Score":11,"Answer":"The solutions given are good for a series that aren\u2019t incremental nor decremental(stationary). In financial time series( or any other series with a a bias) the formula given is not right. 
It should first be detrended, or a scaling should be performed based on the latest 100-200 samples.\nAnd if the time series doesn't come from a normal distribution (as is the case in finance), it is advisable to apply a non-linear function (a standard CDF function, for example) to compress the outliers.\nAronson and Masters' book (Statistically Sound Machine Learning for Algorithmic Trading) uses the following formula (on 200-day chunks): \nV = 100 * N(0.5 * (X - F50)\/(F75 - F25)) - 50 \nwhere:\nX: data point\nF50: mean of the latest 200 points\nF75: 75th percentile\nF25: 25th percentile\nN: normal CDF","Q_Score":9,"Tags":"python,time-series","A_Id":43874003,"CreationDate":"2013-10-08T19:46:00.000","Title":"Python - how to normalize time-series data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataset of time-series examples. I want to calculate the similarity between various time-series examples, however I do not want to take into account differences due to scaling (i.e. I want to look at similarities in the shape of the time-series, not their absolute value). So, to this end, I need a way of normalizing the data. That is, making all of the time-series examples fall between a certain region e.g [0,100]. Can anyone tell me how this can be done in python","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":20885,"Q_Id":19256930,"Users Score":0,"Answer":"I'm not going to give the Python code, but the definition of normalizing is that for every value (datapoint) you calculate \"(value-mean)\/stdev\". Your values will not fall between 0 and 1 (or 0 and 100), but I don't think that's what you want. You want to compare the variation, which is what you are left with if you do this.","Q_Score":9,"Tags":"python,time-series","A_Id":21486466,"CreationDate":"2013-10-08T19:46:00.000","Title":"Python - how to normalize time-series data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I usually have to rerun (most parts of) a notebook when I reopen it, in order to get access to previously defined variables and go on working. \nHowever, sometimes I'd like to skip some of the cells, which have no influence on subsequent cells (e.g., they might comprise a branch of analysis that is finished) and could take a very long time to run. These cells can be scattered throughout the notebook, so that something like \"Run All Below\" won't help much.\nIs there a way to achieve this?\nIdeally, those cells could be tagged with some special flags, so that they could be \"Run\" manually, but would be skipped when \"Run All\".\nEDIT\n%%cache (ipycache extension) as suggested by @Jakob solves the problem to some extent.\nActually, I don't even need to load any variables (which can be large but unnecessary for following cells) when re-run, only the stored output matters for analyzing results.\nAs a work-around, put %%cache folder\/unique_identifier at the beginning of the cell. 
The code will be executed only once and no variables will be loaded when re-run unless you delete the unique_identifier file.\nUnfortunately, all the output results are lost when re-run with %%cache...\nEDIT II (Oct 14, 2013)\nThe master version of ipython+ipycache now pickles (and re-displays) the codecell output as well.\nFor rich display outputs including Latex, HTML(pandas DataFrame output), remember to use IPython's display() method, e.g., display(Latex(r'$\\alpha_1$'))","AnswerCount":7,"Available Count":1,"Score":0.057080742,"is_accepted":false,"ViewCount":40298,"Q_Id":19309287,"Users Score":2,"Answer":"The simplest way to skip python code in jupyter notebook cell from running, I temporarily convert those cells to markdown.","Q_Score":61,"Tags":"python,ipython,ipython-notebook,ipython-magic","A_Id":64969412,"CreationDate":"2013-10-11T02:25:00.000","Title":"How to (intermittently) skip certain cells when running IPython notebook?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to visualize \"Words\/grams\" used in columns of TfidfVectorizer outut in python-scikit library . Is there a way ?\nI tried to to convert csr to array , but cannot see header composed of grams.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":31,"Q_Id":19316788,"Users Score":1,"Answer":"Use get_feature_names method as specified in comments by larsmans","Q_Score":1,"Tags":"python,scikit-learn,tf-idf","A_Id":19317456,"CreationDate":"2013-10-11T11:15:00.000","Title":"Is there a way to see column 'grams' of the TfidfVectoririzer output?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 2 column array, 1st column weights and the 2 nd column values which I am plotting using python. I would like to draw 20 samples from this weighted array, proportionate to their weights. Is there a python\/numpy command which does that?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":1324,"Q_Id":19352225,"Users Score":-1,"Answer":"You need to refine your problem statement better. For example, if your array has only 1 row, what do you expect. If your array has 20,000 rows what do you expect? ...","Q_Score":2,"Tags":"python,numpy,histogram,sample","A_Id":19352360,"CreationDate":"2013-10-14T01:23:00.000","Title":"Sample from weighted histogram","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a data structure that preserves the order of its elements (which may change over the life of the data structure, as the client may move elements around).\nIt should allow fast search, insertion before\/after a given element, removal of a given element, lookup of the first and last elements, and bidirectional iteration starting at a given element.\nWhat would be a good implementation? \nHere's my first attempt:\nA class deriving from both collections.abc.Iterable and collections.abc.MutableSet that contains a linked list and a dictionary. The dictionary's keys are elements, values are nodes in the linked list. 
The dictionary would handle search for a node given an element. Once an element is found, the linked list would handle insertion before\/after, deletion, and iteration. The dictionary would be updated by adding or deleting the relevant key\/value pair. Clearly, with this approach the elements must be hashable and unique (or else, we'll need another layer of indirection where each element is represented by an auto-assigned numeric identifier, and only those identifiers are stored as keys).\nIt seems to me that this would be strictly better in asymptotic complexity than either list or collections.deque, but I may be wrong. [EDIT: Wrong, as pointed out by @roliu. Unlike list or deque, I would not be able to find an element by its numeric index in O(1). As of now, it is O(N) but I am sure there's some way to make it O(log N) if necessary.]","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":708,"Q_Id":19355986,"Users Score":1,"Answer":"Using doubly-linked lists in Python is a bit uncommon. However, your own proposed solution of a doubly-linked list and a dictionary has the correct complexity: all the operations you ask for are O(1).\nI don't think there is in the standard library a more direct implementation. Trees might be nice theoretically, but also come with drawbacks, like O(log n) or (precisely) their general absence from the standard library.","Q_Score":1,"Tags":"python,data-structures,python-3.x,deque","A_Id":19356586,"CreationDate":"2013-10-14T08:15:00.000","Title":"How can I implement a data structure that preserves order and has fast insertion\/removal?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Most languages have a NaN constant you can use to assign a variable the value NaN. Can python do this without using numpy?","AnswerCount":6,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":180714,"Q_Id":19374254,"Users Score":7,"Answer":"You can do float('nan') to get NaN.","Q_Score":116,"Tags":"python,constants,nan","A_Id":19374300,"CreationDate":"2013-10-15T06:02:00.000","Title":"Assigning a variable NaN in python without numpy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking to efficiently merge two (fairly arbitrary) data structures: one representing a set of defaults values and one representing overrides. Example data below. (Naively iterating over the structures works, but is very slow.) 
Thoughts on the best approach for handling this case?\n\n\n_DEFAULT = { 'A': 1122, 'B': 1133, 'C': [ 9988, { 'E': [ { 'F': 6666, }, ], }, ], }\n\n_OVERRIDE1 = { 'B': 1234, 'C': [ 9876, { 'D': 2345, 'E': [ { 'F': 6789, 'G': 9876, }, 1357, ], }, ], }\n_ANSWER1 = { 'A': 1122, 'B': 1234, 'C': [ 9876, { 'D': 2345, 'E': [ { 'F': 6789, 'G': 9876, }, 1357, ], }, ], }\n\n_OVERRIDE2 = { 'C': [ 6543, { 'E': [ { 'G': 9876, }, ], }, ], }\n_ANSWER2 = { 'A': 1122, 'B': 1133, 'C': [ 6543, { 'E': [ { 'F': 6666, 'G': 9876, }, ], }, ], }\n\n_OVERRIDE3 = { 'B': 3456, 'C': [ 1357, { 'D': 4567, 'E': [ { 'F': 6677, 'G': 9876, }, 2468, ], }, ], }\n_ANSWER3 = { 'A': 1122, 'B': 3456, 'C': [ 1357, { 'D': 4567, 'E': [ { 'F': 6677, 'G': 9876, }, 2468, ], }, ], }\n\n\nThis is an example of how to run the tests:\n(The dictionary update doesn't work, just an stub function.)\n\n\n import itertools\n\n def mergeStuff( default, override ):\n # This doesn't work\n result = dict( default )\n result.update( override )\n return result\n\n def main():\n for override, answer in itertools.izip( _OVERRIDES, _ANSWERS ):\n result = mergeStuff(_DEFAULT, override)\n print('ANSWER: %s' % (answer) )\n print('RESULT: %s\\n' % (result) )","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1800,"Q_Id":19378143,"Users Score":0,"Answer":"If you know one structure is always a subset of the other, then just iterate the superset and in O(n) time you can check element by element whether it exists in the subset and if it doesn't, put it there. As far as I know there's no magical way of doing this other than checking it manually element by element. Which, as I said, is not bad as it can be done in with O(n) complexity.","Q_Score":3,"Tags":"python","A_Id":19378238,"CreationDate":"2013-10-15T09:52:00.000","Title":"Python: Merging two arbitrary data structures","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to draw the curve a generic cubic function using matplotlib. I want to draw curves that are defined by functions such as: x^3 + y^3 + y^2 + 2xy^2 = 0. Is this possible to do?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2800,"Q_Id":19422749,"Users Score":0,"Answer":"my 2 cents:\nx^3+y^3+y^2+2xy^2=0\ny^2=-x^3-y^3-2xy^2\ny^2>0 => -x^3-y^3-2xy^2>0 => x^3+y^3+2xy^2<0 =>\nx(x^2+2y^2)+y^3<0 => x(x^2+2y^2)<-y^3 => (x^2+2y^2)<-y^3\/x\n0<(x^2+2y^2) => 0<-y^3\/x => 0>y^3\/x =>\n(x>0 && y<0) || (x<0 && y>0)\nyour graph will span across the 2nd and 4th quadrants","Q_Score":2,"Tags":"python,matplotlib","A_Id":19423078,"CreationDate":"2013-10-17T09:22:00.000","Title":"how to draw a nonlinear function using matplotlib?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I used the Python Pandas library as a wrap-around instead of using SQL. Everything worked perfectly, except when I open the output excel file, the cells appear blank, but when I click on the cell, I can see the value in the cell above. Additionally, Python and Stata recognize the value in the cell, even though the eye cannot see it. 
Furthermore, if I do \"text to columns\", then the values in the cell become visible to the eye.\nClearly it's a pain to go through every column and click \"text to columns\", and I'm wondering the following:\n(1) Why is the value not visible to the eye when it exists in the cell?\n(2) What's the easiest way to make all the values visible to the eye aside from the cumbersome \"text to columns\" for all columns approach?\n(3) I did a large number of tests to make sure the non-visible values in the cells in fact worked in analysis. Is my assumption that the non-visible values in the cells will always be accurate, true? \nThanks in advance for any help you can provide!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":311,"Q_Id":19436220,"Users Score":0,"Answer":"It sounds to me like your python code is inserting a carriage return either before or after the value.\nI've replicated this behavior in Excel 2016 and can confirm that the cell appears blank, but does contain a value.\nFurthermore, I've verified that using the text to columns will parse the carriage return out.","Q_Score":2,"Tags":"python,excel,pandas","A_Id":41857586,"CreationDate":"2013-10-17T20:06:00.000","Title":"Python Pandas Excel Display","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a reset_index equivalent for the column headings? In other words, if the column names are an MultiIndex, how would I drop one of the levels?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4607,"Q_Id":19474693,"Users Score":0,"Answer":"Transpose df, reset index, and transopse again.\ndf.T.reset_index().T","Q_Score":6,"Tags":"python,pandas","A_Id":58662181,"CreationDate":"2013-10-20T06:48:00.000","Title":"Pandas DataFrame.reset_index for columns","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two numpy arrays x and y, which have length 10,000.\nI would like to plot a random subset of 1,000 entries of both x and y.\nIs there an easy way to use the lovely, compact random.sample(population, k) on both x and y to select the same corresponding indices? 
(The y and x vectors are linked by a function y(x) say.)\nThanks.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":15029,"Q_Id":19485641,"Users Score":1,"Answer":"Using the numpy.random.randint function, you generate a list of random numbers, meaning that you can select certain datapoints twice.","Q_Score":29,"Tags":"python,random,numpy","A_Id":69479472,"CreationDate":"2013-10-21T03:04:00.000","Title":"Python random sample of two arrays, but matching indices","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I will have this random number generated e.g 12.75 or 1.999999999 or 2.65\nI want to always round this number down to the nearest integer whole number so 2.65 would be rounded to 2.\nSorry for asking but I couldn't find the answer after numerous searches, thanks :)","AnswerCount":6,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":42133,"Q_Id":19501279,"Users Score":1,"Answer":"I'm not sure whether you want math.floor, math.trunc, or int, but... it's almost certainly one of those functions, and you can probably read the docs and decide more easily than you can explain enough for usb to decide for you.","Q_Score":10,"Tags":"python,integer,rounding","A_Id":19501335,"CreationDate":"2013-10-21T17:46:00.000","Title":"How do I ONLY round a number\/float down in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Basically I want to reshape tensors represented by numpy.ndarray.\nFor example, I want to do something like this (latex notation)\nA_{i,j,k,l,m,n,p} -> A_{i,jk,lm,np}\nor\nA_{i,j,k,l,m,n,p} -> A_{ij,k,l,m,np}\nwhere A is an ndarray. i,j,k,... denotes the original axes.\nso the new axis 2 becomes the \"flattened\" version of axis 2 and 3, etc. If I simply use numpy.reshape, I don't think it knows what axes I want to merge, so it seems ambiguous and error prone.\nIs there any neat way of doing this rather than creating another ndarray manually?","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":802,"Q_Id":19509314,"Users Score":4,"Answer":"Using reshape is never ambiguous. It doesn't change the memory-layout of the data.\nIndexing is always done using the strides determined by the shape.\nThe right-most axis has stride 1, while the axes to the left have strides given by the product of the sizes to their right.\nThat means for you: as long as you collect neighboring axes, it will do the \"right\" thing.","Q_Score":4,"Tags":"python,numpy,reshape","A_Id":19510028,"CreationDate":"2013-10-22T05:00:00.000","Title":"How to merge specific axes without ambuigity with numpy.ndarray","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I'm using Opencv and I want to find the n most common colors of an image using x sensitivity. How could I do this? Are there any opencv functions to do this? 
\nCheers!\n*Note: this isn't homework, i'm just using opencv for fun!","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":4153,"Q_Id":19524905,"Users Score":4,"Answer":"I would transform the images to the HSV color space and then compute a histogram of the H values. Then, take the bins with the largest values.","Q_Score":3,"Tags":"python,opencv","A_Id":19540052,"CreationDate":"2013-10-22T17:50:00.000","Title":"Get most common colours in an image using OpenCV","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"baseline - I have CSV data with 10,000 entries. I save this as 1 csv file and load it all at once. \nalternative - I have CSV data with 10,000 entries. I save this as 10,000 CSV files and load it individually. \nApproximately how much more inefficient is this computationally. I'm not hugely interested in memory concerns. The purpose of the alternative method is because I frequently need to access subsets of the data and don't want to have to read the entire array. \nI'm using python. \nEdit: I can other file formats if needed. \nEdit1: SQLite wins. Amazingly easy and efficient compared to what I was doing before.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1356,"Q_Id":19532159,"Users Score":1,"Answer":"I would write all the lines to one file. For 10,000 lines it's probably not worthwhile, but you can pad all the lines to the same length - say 1000 bytes.\nThen it's easy to seek to the nth line, just multiply n by the line length","Q_Score":0,"Tags":"python,sqlite,csv","A_Id":19532207,"CreationDate":"2013-10-23T03:18:00.000","Title":"Most efficient way to store data on drive","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to calculate eigenvalues and eigenvectors in python. numpy and scipy do not work. They both write Illegal instruction (core dumped). I found out that to resolve the problem I need to check my blas\/lapack. So, I thought that may be an easier way is to write\/find a small function to solve the eigenvalue problem. Does anybody know if such solutions exist?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":12872,"Q_Id":19582197,"Users Score":0,"Answer":"Writing a program to solve an eigenvalue problem is about 100 times as much work as fixing the library mismatch problem.","Q_Score":3,"Tags":"python,linear-algebra","A_Id":19650488,"CreationDate":"2013-10-25T06:02:00.000","Title":"How to find eigenvectors and eigenvalues without numpy and scipy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to use scikits.bvp_solver in python.\nI currently use Canopy as my standard Python interface, where this package isn't available. Is there another available package for solving boundary value problems? 
I have also tried downloading using macports but the procedure sticks when it tries building gcc48 dependency.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":611,"Q_Id":19591907,"Users Score":0,"Answer":"You can try to download the package tar.gz and use easy_install . Or you can unpack the package and use the standard way of python setup.py install. I believe both ways require a fortran compiler.","Q_Score":0,"Tags":"python,scipy,enthought,scikits,canopy","A_Id":19674497,"CreationDate":"2013-10-25T13:59:00.000","Title":"Installing scikits.bvp_solver","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to write a script in python which given coordinates of 2 points in 3d space finds a collinear point in distane 1 unit from one the given points. This third point must lay between those two given.\nI think I will manage with scripting but I am not really sure how to calculate it from mathematical point of view. I found some stuff on google, but they do not answer my question.\nThanks for any advice.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1351,"Q_Id":19611177,"Users Score":2,"Answer":"Given 2 points, (x1,y1,z1) and (x2,y2,z2), you can take the difference between the two, so you end up with (x2-x1,y2-y1,z2-z1). Take the norm of this (i.e. take the distance between the original 2 points), and divide (x2-x1,y2-y1,z2-z1) by that value. You now have a vector with the same slope as the line between the first 2 points, but it has magnitude one, since you normalized it (by dividing by its magnitude). Then add\/subtract that vector to one of the original points to get your final answer.","Q_Score":2,"Tags":"python,algorithm,3d,point","A_Id":19611533,"CreationDate":"2013-10-26T19:52:00.000","Title":"Find a point in 3d collinear with 2 other points","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to install scikit-learn but this library needs scipy and numpy too.\nI tried to add them on the setup.py but I had an error with numpy. I handle to install scikit-learn and numpy from virtenv, but I cannot install scipy.\nI tried pip install scipy. The procedure finished without any problem but there isn't any scipy folder on site-packages.\nAlso, I tried to add only scipy on setup.py. The same as above. The procedure finished without an error but scipy isn't there.\nAny help?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":651,"Q_Id":19619253,"Users Score":0,"Answer":"You will probably find more info sshing into your app ad typing tail_all.","Q_Score":1,"Tags":"python,scipy,openshift","A_Id":19621115,"CreationDate":"2013-10-27T14:37:00.000","Title":"cannot install scipy on openshift","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an iPython notebook that contains an inline plot (i.e. it contains the command plot(x,y)). 
When I issue the command ipython nbconvert --to latex --post PDF --SphinxTransformer.author='Myself' MyNotebook.ipynb the resulting .PDF file contains the figure, but it has been exported to .PNG, so it doesn't look very good (pixelated). How can I tell nbconvert to export all plots\/figures to .EPS instead?\nThank you","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5738,"Q_Id":19659864,"Users Score":0,"Answer":"NBconvert does not run your code. So if you haven't plotted with SVG matplotlib backend it is not possible.\nIf you did so, then you need to write a nbconvert preprocessor that does svg-> eps and extend the relevant template to know how to embed EPS.","Q_Score":10,"Tags":"ipython","A_Id":19662546,"CreationDate":"2013-10-29T13:40:00.000","Title":"iPython nbconvert and latex: use .eps instead of .png for plots","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run CCA for a multi label\/text classification problem but keep getting following warning and an error which I think are related \n\nwarnings.warn('Maximum number of iterations reached')\n \/Library\/Python\/2.7\/site-packages\/sklearn\/cross_decomposition\/pls_.py:290:\n UserWarning: X scores are null at iteration 0 warnings.warn('X\n scores are null at iteration %s' % k)\nwarnings.warn('Maximum number of iterations reached')\n \/Library\/Python\/2.7\/site-packages\/sklearn\/cross_decomposition\/pls_.py:290:\n UserWarning: X scores are null at iteration 1\n\nwarnings.warn('X scores are null at iteration %s' % k)\n...\nfor all the 400 iterations and then following error at the end which I think is a side effect of above warning:\n\nTraceback (most recent call last): File \"scikit_fb3.py\", line 477,\n in \n getCCA(shorttestfilepathPreProcessed) File \"scikit_fb3.py\", line 318, in getCCA\n X_CCA = cca.fit(x_array, Y_indicator).transform(X) File \"\/Library\/Python\/2.7\/site-packages\/sklearn\/cross_decomposition\/pls_.py\",\n line 368, in transform\n Xc = (np.asarray(X) - self.x_mean_) \/ self.x_std_ File \"\/usr\/local\/bin\/src\/scipy\/scipy\/sparse\/compressed.py\", line 389, in\n sub\n raise NotImplementedError('adding a nonzero scalar to a ' NotImplementedError: adding a nonzero scalar to a sparse matrix is not\n supported\n\nWhat could possibly be wrong?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":877,"Q_Id":19673279,"Users Score":0,"Answer":"CCA doesn't support sparse matrices. By default, you should assume scikit-learn estimators do not grok sparse matrices and check their docstrings to find out if by chance you found one that does.\n(I admit the warning could have been friendlier.)","Q_Score":0,"Tags":"python,classification,scikit-learn","A_Id":19696243,"CreationDate":"2013-10-30T03:27:00.000","Title":"UserWarning: X scores are null at iteration","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example, if we have a numpy array A, and we want a numpy array B with the same elements.\nWhat is the difference between the following (see below) methods? 
When is additional memory allocated, and when is it not?\n\nB = A\nB[:] = A (same as B[:]=A[:]?)\nnumpy.copy(B, A)","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":76805,"Q_Id":19676538,"Users Score":155,"Answer":"All three versions do different things:\n\nB = A\nThis binds a new name B to the existing object already named A. Afterwards they refer to the same object, so if you modify one in place, you'll see the change through the other one too.\nB[:] = A (same as B[:]=A[:]?)\nThis copies the values from A into an existing array B. The two arrays must have the same shape for this to work. B[:] = A[:] does the same thing (but B = A[:] would do something more like 1).\nnumpy.copy(B, A)\nThis is not legal syntax. You probably meant B = numpy.copy(A). This is almost the same as 2, but it creates a new array, rather than reusing the B array. If there were no other references to the previous B value, the end result would be the same as 2, but it will use more memory temporarily during the copy.\nOr maybe you meant numpy.copyto(B, A), which is legal, and is equivalent to 2?","Q_Score":127,"Tags":"python,arrays,numpy","A_Id":19676762,"CreationDate":"2013-10-30T07:44:00.000","Title":"Numpy array assignment with copy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example, if we have a numpy array A, and we want a numpy array B with the same elements.\nWhat is the difference between the following (see below) methods? When is additional memory allocated, and when is it not?\n\nB = A\nB[:] = A (same as B[:]=A[:]?)\nnumpy.copy(B, A)","AnswerCount":3,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":76805,"Q_Id":19676538,"Users Score":33,"Answer":"B=A creates a reference\nB[:]=A makes a copy\nnumpy.copy(B,A) makes a copy\n\nthe last two need additional memory.\nTo make a deep copy you need to use B = copy.deepcopy(A)","Q_Score":127,"Tags":"python,arrays,numpy","A_Id":19676652,"CreationDate":"2013-10-30T07:44:00.000","Title":"Numpy array assignment with copy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use scikit-learn to do some machine learning on natural language data. I've got my corpus transformed into bag-of-words vectors (which take the form of a sparse CSR matrix) and I'm wondering if there's a supervised dimensionality reduction algorithm in sklearn capable of taking high-dimensional, supervised data and projecting it into a lower dimensional space which preserves the variance between these classes. \nThe high-level problem description is that I have a collection of documents, each of which can have multiple labels on it, and I want to predict which of those labels will get slapped on a new document based on the content of the document.\nAt it's core, this is a supervised, multi-label, multi-class problem using a sparse representation of BoW vectors. Is there a dimensionality reduction technique in sklearn that can handle that sort of data? 
Are there other sorts of techniques people have used in working with supervised, BoW data in scikit-learn?\nThanks!","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4068,"Q_Id":19714108,"Users Score":0,"Answer":"Use a multi-layer neural net for classification. If you want to see what the representation of the input is in the reduced dimension, look at the activations of the hidden layer. The role of the hidden layer is by definition optimised to distinguish between the classes, since that's what's directly optimised when the weights are set.\nYou should remember to use a softmax activation on the output layer, and something non-linear on the hidden layer (tanh or sigmoid).","Q_Score":12,"Tags":"python,machine-learning,scikit-learn,dimensionality-reduction","A_Id":19724359,"CreationDate":"2013-10-31T18:33:00.000","Title":"Supervised Dimensionality Reduction for Text Data in scikit-learn","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use scikit-learn to do some machine learning on natural language data. I've got my corpus transformed into bag-of-words vectors (which take the form of a sparse CSR matrix) and I'm wondering if there's a supervised dimensionality reduction algorithm in sklearn capable of taking high-dimensional, supervised data and projecting it into a lower dimensional space which preserves the variance between these classes. \nThe high-level problem description is that I have a collection of documents, each of which can have multiple labels on it, and I want to predict which of those labels will get slapped on a new document based on the content of the document.\nAt it's core, this is a supervised, multi-label, multi-class problem using a sparse representation of BoW vectors. Is there a dimensionality reduction technique in sklearn that can handle that sort of data? Are there other sorts of techniques people have used in working with supervised, BoW data in scikit-learn?\nThanks!","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4068,"Q_Id":19714108,"Users Score":0,"Answer":"Try ISOMAP. There's a super simple built-in function for it in scikits.learn. Even if it doesn't have some of the preservation properties you're looking for, it's worth a try.","Q_Score":12,"Tags":"python,machine-learning,scikit-learn,dimensionality-reduction","A_Id":19714792,"CreationDate":"2013-10-31T18:33:00.000","Title":"Supervised Dimensionality Reduction for Text Data in scikit-learn","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given a large 2d numpy array, I would like to remove a range of rows, say rows 10000:10010 efficiently. I have to do this multiple times with different ranges, so I would like to also make it parallelizable.\nUsing something like numpy.delete() is not efficient, since it needs to copy the array, taking too much time and memory. Ideally I would want to do something like create a view, but I am not sure how I could do this in this case. 
A masked array is also not an option since the downstream operations are not supported on masked arrays.\nAny ideas?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2948,"Q_Id":19719746,"Users Score":3,"Answer":"Because of the strided data structure that defines a numpy array, what you want will not be possible without using a masked array. Your best option might be to use a masked array (or perhaps your own boolean array) to mask the deleted the rows, and then do a single real delete operation of all the rows to be deleted before passing it downstream.","Q_Score":8,"Tags":"python,numpy","A_Id":19719936,"CreationDate":"2013-11-01T02:07:00.000","Title":"How can one efficiently remove a range of rows from a large numpy array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I convert 2D cvMat to 1D? I have tried converting 2D cvMat to Numpy array then used ravel() (I want that kind of resultant matrix).When I tried converting it back to\n cvMat using cv.fromarray() it gives an error that the matrix must be 2D or 3D.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":606,"Q_Id":19732097,"Users Score":0,"Answer":"Use matrix.reshape((-1, 1)) to turn the n-element 1D matrix into an n-by-1 2D one before converting it.","Q_Score":0,"Tags":"python,opencv,numpy","A_Id":19732333,"CreationDate":"2013-11-01T17:34:00.000","Title":"Conversion of 2D cvMat to 1D","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 1 million 3d points I am passing to numpy.linalg.svd but it runs out of memory very quickly. Is there a way to break down this operation into smaller chunks?\nI don't know what it's doing but am I only supposed to pass arrays that represent a 3x3, 4x4 matrix? Because I have seen uses of it online where they were passing arrays with arbitrary number of elements.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4417,"Q_Id":19743525,"Users Score":0,"Answer":"Try to use scipy.linalg.svd instead of numpy's func.","Q_Score":5,"Tags":"python,numpy,matrix,linear-algebra,svd","A_Id":38743037,"CreationDate":"2013-11-02T15:32:00.000","Title":"Is there a way to prevent numpy.linalg.svd running out of memory?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can you tell me when to use these vectorization methods with basic examples? \nI see that map is a Series method whereas the rest are DataFrame methods. I got confused about apply and applymap methods though. Why do we have two methods for applying a function to a DataFrame? 
Again, simple examples which illustrate the usage would be great!","AnswerCount":11,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":436006,"Q_Id":19798153,"Users Score":12,"Answer":"Probably the simplest explanation of the difference between apply and applymap:\napply takes the whole column as a parameter and then assigns the result to this column\napplymap takes each separate cell value as a parameter and assigns the result back to this cell.\nNB: if apply returns a single value, you will have this value instead of the column after assigning, and will eventually have just a row instead of a matrix.","Q_Score":624,"Tags":"python,pandas,dataframe,vectorization","A_Id":37336872,"CreationDate":"2013-11-05T20:20:00.000","Title":"Difference between map, applymap and apply methods in Pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using pandas to get hourly data from a dataset with fifteen minute sampling intervals. My problem using the resample('H', how='ohlc') method is that it provides values within that hour and I want the value closest to the hour. For instance, I would like to take a value sampled at 2:55 instead of one from 3:10, but can't figure out how to find the value that is closest if it occurs prior to the timestamp being evaluated against.\nAny help would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":630,"Q_Id":19799893,"Users Score":0,"Answer":"I suppose you could create another column that is the hour, subtract the time in question, and get the absolute (unsigned) value that you could then apply the min function to. It is not code, but I think the logic is right (or at least close). After you find the mins, you can select them and then do your resample.","Q_Score":1,"Tags":"python,pandas","A_Id":19800482,"CreationDate":"2013-11-05T21:59:00.000","Title":"Using Pandas to get the closest value to a timestamp","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a numpy array, filtered__rows, comprised of LAS data [x, y, z, intensity, classification]. 
I have created a cKDTree of points and have found nearest neighbors, query_ball_point, which is a list of indices for the point and its neighbors.\nIs there a way to filter filtered__rows to create an array of only points whose index is in the list returned by query_ball_point?","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":89890,"Q_Id":19821425,"Users Score":0,"Answer":"The fastest way to do this is X[tuple(index.T)], where X is the ndarray with the elements and index is the ndarray of the indices to be retrieved.","Q_Score":48,"Tags":"python,numpy,scipy,nearest-neighbor","A_Id":70856990,"CreationDate":"2013-11-06T19:50:00.000","Title":"How to filter numpy array by list of indices?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use the cv2.getAffineTransform(src,dst) function in openCV, but it crashes because my inputs are arrays containing 125 pairs of x,y coordinates and getAffineTransform wants its input to have three columns. Can I just concat a row full of zeros onto my array or is there a special transformation I should do?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":935,"Q_Id":19865347,"Users Score":3,"Answer":"No, I think something else is the problem. The docs say: cv2.getAffineTransform calculates an affine transform from three pairs of the corresponding points.\nThe problem is you are giving it 125 pairs of points. It only wants 3 pairs of point correspondences. This is, of course, the number of correspondences needed to solve the linear system of equations. If you are looking to estimate an affine transformation from noisy correspondences, then you will need to use something like weighted least squares or RANSAC. To estimate an affine transform from noisy data with a prepackaged algorithm, it looks like cv2.estimateRigidTransform might work with fullAffine = True.","Q_Score":1,"Tags":"python,opencv","A_Id":19867945,"CreationDate":"2013-11-08T17:44:00.000","Title":"Converting x,y coordinates into a format for getAffineTransform?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a better way to determine whether a variable in Pandas and\/or NumPy is numeric or not?\nI have a self-defined dictionary with dtypes as keys and numeric \/ not as values.","AnswerCount":10,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":132614,"Q_Id":19900202,"Users Score":2,"Answer":"Just to add to all the other answers, one can also use df.info() to see what the data type of each column is.","Q_Score":129,"Tags":"python,pandas,numpy","A_Id":50679155,"CreationDate":"2013-11-11T06:32:00.000","Title":"How to determine whether a column\/variable is numeric or not in Pandas\/NumPy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there an easy way to use PyMC's MCMC algorithms to efficiently sample a parameter space for a frequentist analysis? 
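For the "filter by a list of indices" answer above: when the indices are a flat list of row positions (as returned by cKDTree.query_ball_point), plain integer (fancy) indexing is usually all that is needed. The arrays below are invented stand-ins for the LAS data.

```python
import numpy as np

filtered_rows = np.random.rand(1000, 5)   # stand-in for [x, y, z, intensity, classification]
neighbor_idx = [3, 17, 42, 256]           # e.g. one result of cKDTree.query_ball_point

subset = filtered_rows[neighbor_idx]      # only the rows whose index is in the list
# X[tuple(index.T)] is only needed when `index` holds multi-dimensional coordinates.
```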
I'm not interested in the point density (for Bayesian analysis), but rather want a fast and efficient way to sample a multidimensional parameter space, so I would like to trace all tested points (i.e. in particular also the rejected points), while recurring points need to be saved only once in the trace.\nI would be grateful for any helpful comments. \nBtw, thanks for developing PyMC, it is a great package!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":155,"Q_Id":19913421,"Users Score":0,"Answer":"You can create custom StepMethods to perform any kind of sampling you like. See the docs for how to create your own.","Q_Score":3,"Tags":"python,pymc","A_Id":19960199,"CreationDate":"2013-11-11T18:38:00.000","Title":"Using MCMC from PyMC as an efficient sampler for frequentist analysis?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Scikit-learn's CountVectorizer for the bag-of-words approach currently gives two sub-options: (a) use a custom vocabulary, or (b) if a custom vocabulary is unavailable, build a vocabulary from all the words present in the corpus. \nMy question: Can we specify a custom vocabulary to begin with, but ensure that it gets updated when new words are seen while processing the corpus? I am assuming this is doable since the matrix is stored via a sparse representation. \nUsefulness: It will help in cases when one has to add additional documents to the training data, and one should not have to start from the beginning.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":2027,"Q_Id":19945334,"Users Score":2,"Answer":"No, this is not possible at present. It's also not \"doable\", and here's why.\nCountVectorizer and TfidfVectorizer are designed to turn text documents into vectors. These vectors all need to have an equal number of elements, which in turn is equal to the size of the vocabulary, because that convention is ingrained in all scikit-learn code. If the vocabulary is allowed to grow, then the vectors produced at various times have different lengths. This affects, e.g., the number of parameters in a linear (or other parametric) classifier trained on such vectors, which then also needs to be able to grow. It affects k-means and dimensionality reduction classes. It even affects something as simple as matrix multiplications, which can no longer be handled with a simple call to NumPy's dot routine, requiring custom code instead. 
In other words, allowing this flexibility in the vectorizers makes little sense unless you adapt all of scikit-learn to handle the result.\nWhile this would be possible, I (as a core scikit-learn developer) would strongly oppose the change because it makes the code very complicated and probably slower, and even if it did work, it would make it impossible to distinguish between a \"growing vocabulary\" and the much more common situation of a user passing data in the wrong way, so that the number of dimensions comes out wrong.\nIf you want to feed data in batches, then either use a HashingVectorizer (no vocabulary) or do two passes over the data to collect the vocabulary up front.","Q_Score":5,"Tags":"python,numpy,scipy,scikit-learn,scikits","A_Id":24098425,"CreationDate":"2013-11-13T04:30:00.000","Title":"Adding new words to text vectorizer in scikit-learn","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to find a transformation matrix that relates two 3D point clouds. According to the documentation, cv2.estimateAffine3D(src, dst) --> retval, out, inliers. \nsrc is the first 3D point set, dst is the second 3D point set.\nI'm assuming that retval is a boolean.\nOut is the 3x4 affine transformation matrix, and inliers is a vector.\nMy question is, what are the shapes of the input point sets? Do the points have to be homogeneous, i.e. 4xN?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":515,"Q_Id":20008825,"Users Score":1,"Answer":"As far as I can understand from the source code, they have to be Point3D, i.e. non-homogeneous.","Q_Score":0,"Tags":"python,opencv","A_Id":20009112,"CreationDate":"2013-11-15T19:14:00.000","Title":"OpenCV estimateAffine3D in Python: shapes of input matrix?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have data coming from a csv which has a few thousand columns and ten thousand (or so) rows. Within each column the data is of the same type, but different columns have data of different type*. Previously I have been pickling the data from numpy and storing on disk, but it's quite slow, especially because usually I want to load some subset of the columns rather than all of them.\nI want to put the data into hdf5 using pytables, and my first approach was to put the data in a single table, with one hdf5 column per csv column. Unfortunately this didn't work, I assume because of the 512 (soft) column limit.\nWhat is a sensible way to store this data?\n* I mean, the type of the data after it has been converted from text.","AnswerCount":5,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":2271,"Q_Id":20045535,"Users Score":3,"Answer":"This might not, in fact, be possible to do in a naive way. HDF5 allocates 64 KB of space for metadata for every data set. This metadata includes the types of the columns. So while the number of columns is a soft limit, somewhere in the 2-3 thousand range you typically run out of space to store the metadata (depending on the length of the column names, etc.). \nFurthermore, doesn't numpy limit the number of columns to 32? How are you representing the data with numpy now? 
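Relating to the scikit-learn answer above about feeding data in batches, a minimal hedged sketch of both options (a stateless HashingVectorizer, or a first pass to fix the vocabulary); the document batches are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer, HashingVectorizer

batch1 = ["the cat sat", "on the mat"]
batch2 = ["now the dog appears"]          # contains words unseen in batch1

# Option 1: stateless hashing, so new words never change the vector length.
hv = HashingVectorizer(n_features=2**16)
X1 = hv.transform(batch1)
X2 = hv.transform(batch2)                 # same number of columns as X1

# Option 2: one pass to collect the vocabulary, then vectorize with it fixed.
vocab = sorted({w for doc in batch1 + batch2 for w in doc.split()})
cv = CountVectorizer(vocabulary=vocab)
X_all = cv.transform(batch1 + batch2)
```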
Anything that you can get into a numpy array should correspond to a pytables Array class.","Q_Score":11,"Tags":"python,numpy,hdf5,pytables","A_Id":20099740,"CreationDate":"2013-11-18T10:30:00.000","Title":"How to store wide tables in pytables \/ hdf5","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have data coming from a csv which has a few thousand columns and ten thousand (or so) rows. Within each column the data is of the same type, but different columns have data of different type*. Previously I have been pickling the data from numpy and storing on disk, but it's quite slow, especially because usually I want to load some subset of the columns rather than all of them.\nI want to put the data into hdf5 using pytables, and my first approach was to put the data in a single table, with one hdf5 column per csv column. Unfortunately this didn't work, I assume because of the 512 (soft) column limit.\nWhat is a sensible way to store this data?\n* I mean, the type of the data after it has been converted from text.","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":2271,"Q_Id":20045535,"Users Score":1,"Answer":"You should be able to use a pandas DataFrame.\nIt can be saved to disk without converting to CSV.","Q_Score":11,"Tags":"python,numpy,hdf5,pytables","A_Id":20155746,"CreationDate":"2013-11-18T10:30:00.000","Title":"How to store wide tables in pytables \/ hdf5","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have data coming from a csv which has a few thousand columns and ten thousand (or so) rows. Within each column the data is of the same type, but different columns have data of different type*. Previously I have been pickling the data from numpy and storing on disk, but it's quite slow, especially because usually I want to load some subset of the columns rather than all of them.\nI want to put the data into hdf5 using pytables, and my first approach was to put the data in a single table, with one hdf5 column per csv column. Unfortunately this didn't work, I assume because of the 512 (soft) column limit.\nWhat is a sensible way to store this data?\n* I mean, the type of the data after it has been converted from text.","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":2271,"Q_Id":20045535,"Users Score":1,"Answer":"IMHO it depends on what you want to do with the data afterwards and how much of it you need at one time. I had to build a program for statistical validation a while ago and we had two approaches:\n\nSplit the columns into separate tables (e.g. using a FK). 
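For the "use a pandas DataFrame" suggestion above, a minimal sketch (assuming pandas with PyTables installed) of saving in the 'table' format so that only a subset of columns needs to be read back later; the file name and column names are made up, and very wide frames may still run into PyTables column limits.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(10_000, 50),
                  columns=[f"c{i}" for i in range(50)])

# 'table' format allows reading back a column subset without loading everything.
df.to_hdf("wide.h5", key="data", format="table")

subset = pd.read_hdf("wide.h5", key="data", columns=["c3", "c7"])
```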
The overhead of loading them is not too high.\nTranspose the table, resulting in something like a key-value store, where the key is a tuple of (column, row).\n\nFor both we used Postgres.","Q_Score":11,"Tags":"python,numpy,hdf5,pytables","A_Id":20240079,"CreationDate":"2013-11-18T10:30:00.000","Title":"How to store wide tables in pytables \/ hdf5","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The title may not be as explicit as I wish it were, but here is what I am trying to achieve:\nUsing Boost.Python, I expose a set of classes\/functions to Python in the typical BOOST_PYTHON_MODULE(MyPythonModule) macro from C++, which produces MyPythonModule.pyd after compilation. I can now invoke a Python script from C++ and play around with MyPythonModule without any issue (e.g. create objects, call methods and use my registered converters). FYI: the converter I'm referring to is a numpy.ndarray to cv::Mat converter.\nThis works fine, but when I try to write a standalone Python script that uses MyPythonModule, my converters are not available. I tried to expose the C++ method that performs the converter registration to Python without any luck.\nIf my explanation isn't clear enough, don't hesitate to ask questions in the comments.\nThanks a lot for your help \/ suggestions.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":20055758,"Users Score":0,"Answer":"I found the problem... The prototype of my C++ function was taking cv::Mat& as an argument and the converter was registered for cv::Mat without the reference.\nThat was silly.","Q_Score":0,"Tags":"python,boost,converters","A_Id":20058784,"CreationDate":"2013-11-18T19:05:00.000","Title":"Boost.Python: Converters unavailable from standalone python script","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a procedure similar to plt.gca() to get a handle to the current axes? I first do a=mlab.surf(x, y, u2, warp_scale='auto')\nand then\nb=mlab.plot3d(yy, yy, (yy-40)**2, tube_radius=20.0)\nbut the origins of a and b are different and the plot looks incorrect. So I want to put b into the axes of a.\nIn short, what would be the best way in mayavi to draw a surface and a line on the same axes?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":501,"Q_Id":20062512,"Users Score":1,"Answer":"What you are expecting to be able to do from your matplotlib experience is not how mayavi axes work. In matplotlib the visualization is a child of the axes and the axes determines its coordinates. In mayavi or vtk, visualization sources consist of points in space. 
Axes are objects that surround a source and provide tick markings of the coordinate extent of those objects; they are not necessary for the visualizations, and where they exist they are children of sources.","Q_Score":0,"Tags":"python,mayavi","A_Id":20076370,"CreationDate":"2013-11-19T03:21:00.000","Title":"mayavi mlab get current axes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using OpenCV 2.4.6 with C++ (with Python sometimes too, but it is irrelevant). I would like to know if there is a simple way to get all the available frame sizes from a capture device.\nFor example, my webcam can provide 640x480, 320x240 and 160x120. Suppose that I don't know about these frame sizes a priori... Is it possible to get a vector or an iterator, or something like this that could give me these values?\nIn other words, I don't want to get the current frame size (which is easy to obtain) but the sizes I could set the device to.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1874,"Q_Id":20081818,"Users Score":1,"Answer":"When you retrieve a frame from a camera, it is the maximum size that that camera can give. If you want a smaller image, you have to specify it when you get the image, and opencv will resize it for you.\nA normal camera has one sensor of one size, and it sends one kind of image to the computer. What opencv does with it thereafter is up to you to specify.","Q_Score":1,"Tags":"c++,python,opencv,video-capture,image-capture","A_Id":20085854,"CreationDate":"2013-11-19T20:55:00.000","Title":"Getting all available frame size from capture device with OpenCV","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Pandas dataframe and I want to find all the unique values in that dataframe...irrespective of row\/columns. If I have a 10 x 10 dataframe, and suppose they have 84 unique values, I need to find them, not the count.\nI can create a set and add the values of each row by iterating over the rows of the dataframe. But I feel that it may be inefficient (cannot justify that). Is there an efficient way to find it? Is there a predefined function?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":90919,"Q_Id":20084382,"Users Score":7,"Answer":"Or you can use:\ndf.stack().unique()\nThen you don't need to worry if you have NaN values, as they are excluded when doing the stacking.","Q_Score":51,"Tags":"python,pandas,dataframe","A_Id":42714576,"CreationDate":"2013-11-19T23:26:00.000","Title":"Find unique values in a Pandas dataframe, irrespective of row or column location","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am faced with this problem:\nI have to build an FFNN that has to approximate an unknown function f:R^2 -> R^2. The data in my possession to check the net is a one-dimensional R vector. I know the function g:R^2->R that will map the output of the net into the space of my data. So I would use the neural network as a filter against bias in the data. 
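A quick hedged illustration of the stack-based approach from the answer above for collecting unique values across an entire DataFrame; the frame here is invented, and stack() drops NaN cells by default.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 2, np.nan], "b": [2, 3, 3, 4]})

uniques = df.stack().unique()   # NaN is dropped by stack()
print(uniques)                  # [1. 2. 3. 4.]
```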
But I am faced with two problems:\nFirstly, how can I train my network in this way?\nSecondly, I am thinking about adding an extra hidden layer that maps R^2->R and lets the net train itself to find the correct maps, and then removing the extra layer. Would this algorithm be correct? Namely, would the output be the same as what I was looking for?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":203,"Q_Id":20179255,"Users Score":1,"Answer":"Your idea with an additional layer is good, although the problem is that the weights in this layer have to be fixed. So in practice, you have to compute the partial derivatives of your R^2->R mapping, which can be used as the error to propagate through your network during training. Unfortunately, this may lead to the well-known \"vanishing gradient problem\", which stalled the development of NNs for many years.\nIn short: you can either manually compute the partial derivatives and, given the expected output in R, simply feed the computed \"backpropagated\" errors to the network looking for the R^2->R^2 mapping, or, as you said, create an additional layer and train it normally, but you will have to keep the upper weights constant (which will require some changes in the implementation).","Q_Score":0,"Tags":"python,algorithm,neural-network,pybrain","A_Id":20200810,"CreationDate":"2013-11-24T18:35:00.000","Title":"Train a feed forward neural network indirectly","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a data set comprising a vector of features, and a target - either 1.0 or 0.0 (representing two classes). If I fit a RandomForestRegressor and call its predict function, is it equivalent to using RandomForestClassifier.predict_proba()?\nIn other words, if the target is 1.0 or 0.0, does RandomForestRegressor output probabilities?\nI think so, and the results I am getting suggest so, but I would like to get a second opinion...\nThanks\nWeasel","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4377,"Q_Id":20179267,"Users Score":4,"Answer":"There is a major conceptual difference between those, based on the different tasks being addressed:\nRegression: continuous (real-valued) target variable. \nClassification: discrete target variable (classes).\nFor a general classification method, the probability of an observation being class X may not be defined, as some classification methods, knn for example, do not deal with probabilities.\nHowever, for Random Forest (and some other classification methods), classification is reduced to regression of the class probability distribution. The predicted class is then taken as the argmax of the computed \"probabilities\". In your case, you feed the same input, you get the same result. And yes, it is OK to treat the values returned by RandomForestRegressor as probabilities.","Q_Score":4,"Tags":"python,scikit-learn","A_Id":20180148,"CreationDate":"2013-11-24T18:36:00.000","Title":"using RandomForestClassifier.predict_proba vs RandomForestRegressor.predict","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using nbconvert to produce something as close as possible to a polished journal article. 
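A small hedged comparison of the two calls discussed above, on an invented binary 0.0/1.0 target. With the same data the regressor's predictions generally track the classifier's class-1 probabilities, but they are not guaranteed to be identical, since the two estimators use different split criteria.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 4)
y = (X[:, 0] + 0.1 * rng.rand(200) > 0.5).astype(float)   # target is 0.0 or 1.0

reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

p_reg = reg.predict(X)              # mean of the per-tree regression outputs
p_clf = clf.predict_proba(X)[:, 1]  # averaged per-tree class-1 fractions
```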
\nI have successfully hidden input code using a custom nbconvert template. The doc is now looking very nice. \nBut I don't know how to suppress the bright red 'Out[x]' statement in the top left corner of the output cells. Does anyone know of any settings or hacks that can remove this as well?\nThanks, \nJohn","AnswerCount":3,"Available Count":1,"Score":0.3215127375,"is_accepted":false,"ViewCount":5150,"Q_Id":20184994,"Users Score":5,"Answer":"%%HTML\n